H: Minimizing area of a triangle with two fixed point and a point on parabola
A triangle is made up of three points, $A, B$, and $P$.
$A(-1, 0)$
$B(0, 1)$
$P$ is a point on $y^2 = x$
Minimize the area of Triangle $ABP$.
My approach is far too complicated: I generalize $P$ as $(t^2, t)$, which makes the whole thing quite messy. Maybe that's the right thing to do, but I'm looking at the whole thing wrong.
AI: HINT:
First of all, the point on the parabola $y^2=x$ should indeed be written as $(t^2,t)$.
Now the area $$\frac12\left|\det\begin{pmatrix} -1 & 0 & 1 \\ 0 & 1 & 1 \\ t^2 & t&1\end{pmatrix}\right|$$
$$=\frac12 (t^2-t+1)=\frac{(2t-1)^2+3}8\ge \frac38$$
So, the area will be minimum if $t=\frac12$ |
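A quick numerical sanity check of this minimum (a sketch in Python, not part of the original answer; the area is computed with the same determinant formula):

```python
import numpy as np

def area(t):
    # Triangle A(-1, 0), B(0, 1), P(t^2, t); area = (1/2) |cross product of AB and AP|
    A, B, P = (-1.0, 0.0), (0.0, 1.0), (t * t, t)
    return 0.5 * abs((B[0] - A[0]) * (P[1] - A[1]) - (P[0] - A[0]) * (B[1] - A[1]))

ts = np.linspace(-3, 3, 60001)
areas = np.array([area(t) for t in ts])
i = areas.argmin()
print(ts[i], areas[i])   # ~0.5 and 0.375 = 3/8, matching t = 1/2 and area 3/8
```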
H: Operators such that $T\circ S=I$ but $S\circ T\neq I$.
Suppose $S$ and $T$ are linear operators on a vector space $V$ and $T\circ S=I$ where $I$ is the identity map. It's easy to see that $S$ is one-to-one.
If $V$ is finite dimensional, rank-nullity implies $S$ is invertible, so a little manipulation shows that $S\circ T=I$.
Makes me curious, if $V$ is infinite dimensional, is there an example of $T$ and $S$ such that $T\circ S=I$ but $S\circ T\neq I$?
AI: Let $V=l_2$ (this is the space of sequences $(x_1,x_2,\ldots)$ of complex numbers such that $\Sigma_i |x_i|^2$ is finite), $S(x_1,x_2,\ldots) = (0,x_1,x_2,\ldots)$, and $T(x_1,x_2,\ldots) = (x_2,x_3,\ldots)$. Then $TS=I$, but $ST(x_1,x_2,\ldots) = (0,x_2,\ldots)$. |
H: A numerical analysis problem
I was looking at old exam papers and was stuck on the following problem:
I have hardly any idea how to progress with the problem. Can some give some explanation about how to progress with the problem?
AI: I am not sure what "large" and "small" mean precisely here, but I would imagine that this is getting at the operator norm. Given a matrix $A$, there is the operator norm $\|A\|$ which satisfies $$\|Ax \| \leq \|A\|\|x\|$$ for any $x$. In the language of the question, we have the equation $$\|r\| = \|A(x - x_c)\| = \|Ae\| \leq \|A\|\|e\|.$$ If we push the vector $r$ to zero, what happens to $e$, given this inequality? If we push $e$ to zero, what happens to $r$? |
H: Can't simplify this boolean algebra equation
So I've got an expression I have been trying to simplify and have the answer but I can't figure out how to get to it... can anyone help me out?
Equation: $(A\wedge \lnot B \wedge \lnot C \wedge \lnot D) \vee (C \wedge \lnot D) \vee (C \wedge \lnot D) = (A \wedge \lnot B \wedge \lnot D) \vee (C \wedge \lnot D)$
AI: A Karnaugh map can help. You have already invoked Idempotent Law to eliminate the repeated term $(C \wedge \lnot D)$. Let $T$ represent a tautology (always true). Then to simplify further, observe that:
$$ \begin{align*}
(C \wedge \lnot D)
&= (T) \land (C \wedge \lnot D) & \text{by Identity Law}\\
&= [(A \land \neg B) \lor T] \land (C \wedge \lnot D) & \text{by Domination Law}\\
&= [(A \land \neg B) \land (C \wedge \lnot D)] \lor [T \land (C \wedge \lnot D)] & \text{by Distributive Law}\\
&= (A \land \neg B \land C \wedge \lnot D) \lor (C \wedge \lnot D) & \text{by Identity Law} \\
\end{align*}$$
Basically, we are able to introduce an extra term for free that will help us factor the original expression. Returning to the original problem, we obtain:
$$ \begin{align*}
&(A\wedge \lnot B \wedge \lnot C \wedge \lnot D) \vee (C \wedge \lnot D) \\
&= (A\wedge \lnot B \wedge \lnot C \wedge \lnot D) \vee (A \land \neg B \land C \wedge \lnot D) \lor (C \wedge \lnot D) & \text{by substituting above}\\
&= [(A\wedge \lnot B \wedge \lnot D) \land (\neg C \lor C)] \lor (C \wedge \lnot D) & \text{by Distributive Law}\\
&= [(A\wedge \lnot B \wedge \lnot D) \land (T)] \lor (C \wedge \lnot D) & \text{by Inverse Law}\\
&= (A \wedge \lnot B \wedge \lnot D) \vee (C \wedge \lnot D) & \text{by Identity Law} \\
\end{align*}$$
as desired. |
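A brute-force truth-table check of this identity (a small Python sketch, not part of the original answer):

```python
from itertools import product

def lhs(A, B, C, D):
    return (A and not B and not C and not D) or (C and not D) or (C and not D)

def rhs(A, B, C, D):
    return (A and not B and not D) or (C and not D)

# Compare the two sides on all 16 truth assignments
assert all(lhs(*v) == rhs(*v) for v in product([False, True], repeat=4))
print("LHS and RHS agree on all 16 rows of the truth table")
```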
H: Given $\frac {a\cdot y}{b\cdot x} = \frac CD$, find $y$.
That's a pretty easy one... I have the following equality : $\dfrac {a\cdot y}{b\cdot x} = \dfrac CD$ and I want to leave $y$ alone so I move "$b\cdot x$" to the other side
$$a\cdot y= \dfrac {(C\cdot b\cdot x)}{D}$$
and then "$a$"
$$y=\dfrac {\dfrac{(C\cdot b\cdot x)}D} a.$$
Where is my mistake? I should be getting $y= \dfrac {(b\cdot C\cdot x)}{(a\cdot D)}$.
I know that the mistake I am making is something very stupid, but can't work it out. Any help? Cheers!
AI: No mistake was made. Observe that:
$$
y=\dfrac{\left(\dfrac{Cbx}{D}\right)}{a}=\dfrac{Cbx}{D} \div a = \dfrac{Cbx}{D} \times \dfrac{1}{a}=\dfrac{Cbx}{Da}=\dfrac{bCx}{aD}
$$
as desired. |
H: Is every field the field of fractions for some integral domain?
Given an integral domain $R$, one can construct its field of fractions (or quotients) $\operatorname{Quot}(R)$ which is of course a field. Does every field arise in this way? That is:
Given a field $\mathbb{F}$, does there exist an integral domain $R$ such that $\operatorname{Quot}(R) \cong \mathbb{F}$?
AI: Yes: take $R=F$ itself. (You can't hope to do better than this in general; consider the finite fields.)
Here's a cute example, though. It turns out that $\mathbb{C}$ is isomorphic to $\overline{ \mathbb{C}(t) }$. From this it follows that $\mathbb{C}$ is isomorphic to the fraction field of the integral closure of $\mathbb{C}[t]$ in the algebraic closure of its fraction field (the analogue of the algebraic integers in this setting).
Here's some geometry. If your field $F$ is finitely generated over some base field $k$, you can think of it as the function field of some variety $X$. Finding a nice domain whose field of fractions is $F$ can be interpreted geometrically as finding an affine variety birational to $X$, which you can do as follows: $F$ is necessarily a finite extension of $k(x_1, ... x_n)$ for some $n$, so we can take the integral closure of $k[x_1, ... x_n]$ in $F$. |
H: Find the product of the following determinants (involving logarithms with different bases)
Find the product of the following determinants:
$$\begin{vmatrix}
\log_3512 & \log_43 \\
\log_38 & \log_49
\end{vmatrix} * \begin{vmatrix}
\log_23 & \log_83 \\
\log_34 & \log_34
\end{vmatrix}$$
I tried to solve it like this:
$$\begin{vmatrix}
9\log_3 2 & \log_43 \\
3\log_32 & 2\log_43
\end{vmatrix} * \begin{vmatrix}
\log_23 & \log_83 \\
2\log_32 & 2\log_32
\end{vmatrix}$$ $$(9\log_3 2\cdot2\log_43-3\log_32\cdot\log_43)(\log_23\cdot2\log_32-2\log_32\cdot\log_83)$$
How do I get the final answer?
Please offer your assistance
AI: You have
$$(18\log_32\log_43-3\log_32\log_43)(2\log_23\log_32-2\log_32\log_83)\;,$$
which is correct. Clearly
$$18\log_32\log_43-3\log_32\log_43=15\log_32\log_43\;,$$
so we can immediately simplify it to
$$15\log_32\log_43(2\log_23\log_32-2\log_32\log_83)\;.\tag{1}$$
Now if $4^x=3$, then $2^{2x}=3$, so $\log_23=2\log_43$, or $\log_43=\frac12\log_23$. Similarly, you can verify that $\log_83=\frac13\log_23$. Thus, we can further simplify $(1)$ to
$$\frac{15}2\log_32\log_23\left(2\log_23\log_32-\frac23\log_32\log_23\right)\;.\tag{2}$$
This is extremely easy to evaluate if you know something about products of the form $\log_ab\log_ba$. If not, use the fact that in general $\log_bx=\frac{\log_ax}{\log_ab}$. I'll complete the calculation below.
A useful general fact is that $\log_ab\log_ba=1$; this follows easily from the fact that in general $\log_bx=\frac{\log_ax}{\log_ab}$. Thus, $\log_23\log_32=1$, and $(2)$ is simply $\dfrac{15}2\left(2-\dfrac23\right)=\dfrac{15}2\cdot\dfrac43=10$. |
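A quick numerical check of this value (a sketch; Python's math.log(x, b) computes $\log_b x$):

```python
from math import log

# |log_3 512  log_4 3|       |log_2 3  log_8 3|
# |log_3 8    log_4 9|  and  |log_3 4  log_3 4|
d1 = log(512, 3) * log(9, 4) - log(3, 4) * log(8, 3)
d2 = log(3, 2) * log(4, 3) - log(3, 8) * log(4, 3)
print(d1 * d2)   # 10.000000..., matching the value derived above
```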
H: A choice question on determinants
If $A$ and $B$ are square matrices of order $2$, then $\det(A+B)=0$ is possible only when:
(a) $\det(A)=0$ or $\det(B)=0$; (b) $\det(A)+\det(B)=0$; (c) $\det(A)=0$ and $\det(B)=0$; (d) $A+B=0$.
I was sure that when $A+B=0$, $\det(A+B)=\det(0)=0$, so the answer is $(d)$.
But I am not able to show that the other three do not meet the condition.
Please offer your assistance.
Thank you :)
AI: Counterexample for cases (a), (b) and (c): take $A=I$, $B=-I$.
Then $A+B=0\Rightarrow \det(A+B)=0$, but $\det(A)=\det(B)=1$. |
H: Does this inequality have any solutions for composite $n \in \mathbb{N}$?
Does this inequality have any solutions for composite $n \in \mathbb{N}$?
$$\sqrt{2} < \frac{\sigma_1(n^2)}{n^2} < \frac{4n^2}{(n + 1)^2}$$
Note that $\sigma_1$ is the sum-of-divisors function.
AI: Note that,
$\displaystyle f(x)=\frac{4x^2}{(x+1)^2}$ is an increasing function for positive $x$ (i.e. $x>0$)
$\displaystyle\lim_{x\to \infty }\frac{4x^2}{(x+1)^2}=4$
So $\exists m\in N$ such that $\displaystyle\frac{4x^2}{(x+1)^2}>3$, $\forall x\ge m$
Now we choose $n=2^a3^b$ with $a,b\ge 1$ and $n\ge m$ (note that such an $n$ exists).
Then we have $\displaystyle\frac{\sigma_{1}(n^2)}{n^2}\le \prod_{i=1}^{k}(\frac{p_i}{p_i-1})=\frac{3}{2}\frac{2}{1}=3<\frac{4n^2}{(n+1)^2}$(The proof of the fact is stated in my observation.)
And we have $n^2=2^{2a}3^{2b}$ as $2a\ge 2$
So $\displaystyle\frac{\sigma_{1}(n^2)}{n^2}=(\sum_{j=0}^{2a}2^{-j})(\sum_{j=0}^{2b}3^{-j})>(1+\frac{1}{2})1=1.5>\sqrt{2}$
Hence there exists one such composite $n$ satisfying the condition of the question.
I have made some observations (which might help others in analysing a bit more), which I am stating below:
Let $\displaystyle n=\prod_{i=1}^{k}p_i^{\alpha_{i}}$ then we have,
$\displaystyle\sigma_{1}(n^2)=\prod_{i=1}^{k}(\sum_{j=0}^{2\alpha_i}p_{i}^{j})$
Then $\displaystyle\frac{\sigma_{1}(n^2)}{n^2}=\frac{\prod_{i=1}^{k}(\sum_{j=0}^{2\alpha_i}p_{i}^{j})}{\prod_{i=1}^{k}p_i^{2\alpha_{i}}}=\prod_{i=1}^{k}(\sum_{j=0}^{2\alpha_i}p_{i}^{-j})$
And by observing that $p_i^{-j}\le1(\forall 1\le j\le2\alpha_i)$
We have,
$\displaystyle\frac{\sigma_{1}(n^2)}{n^2}=\prod_{i=1}^{k}\left(\sum_{j=0}^{2\alpha_i}p_{i}^{-j}\right)\le \prod_{i=1}^{k}(2\alpha_i+1)=$ the number of divisors of $n^2$
we also have,
$\displaystyle\frac{\sigma_{1}(n^2)}{n^2}=\prod_{i=1}^{k}(\sum_{j=0}^{2\alpha_i}p_{i}^{-j})\le\prod_{i=1}^{k}(\sum_{j=0}^{\infty}p_{i}^{-j})\le \prod_{i=1}^{k}(\frac{1}{1-\frac{1}{p_i}})=\prod_{i=1}^{k}(\frac{p_i}{p_i-1})$
Please feel free to point out mistakes (if there are any). |
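A small numerical check of the existence claim (a sketch, not part of the original answer; it tests the double inequality directly for a few composite $n$ of the form $2^a3^b$):

```python
from math import isqrt, sqrt

def sigma(m):
    # sum of divisors of m by trial division (fine for small m)
    s = 0
    for d in range(1, isqrt(m) + 1):
        if m % d == 0:
            s += d
            if d != m // d:
                s += m // d
    return s

for n in (6, 12, 24, 48):                 # composite n = 2^a * 3^b
    ratio = sigma(n * n) / (n * n)
    upper = 4 * n * n / (n + 1) ** 2
    print(n, sqrt(2) < ratio < upper)     # True for each of these n
```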
H: What does this really mean?
What exactly does this expression mean? I keep seeing it in statistics but I never really understood what it is supposed to be. Is it another way of writing the variance?
$$\Sigma_{i=1}^{n}(Y_i-\bar{Y})^2$$
By the way, the variance expression I am familiar with is $Var(X)= E(X^2)- [E(X)]^2$. Would the expression above be a different way of writing this? If not, what does it mean? At the least, please explain what the variables in the expression are supposed to be.
AI: The expression
$$\Sigma_{i=1}^{n}(Y_i-\bar{Y})^2$$
is the sample-level counterpart of the population-level quantity
$$E(X^2)- [E(X)]^2 = E\left[(X-E(X))^2\right].$$
Here $Y_1,\dots,Y_n$ are observed data values and $\bar{Y}$ is their sample mean; dividing the sum of squared deviations by $n$ (or by $n-1$ for the unbiased version) gives the sample variance, which estimates $Var(X)$. |
H: How to prove that the implicit function theorem implies the inverse function theorem?
I can prove the converse of it, but I cannot do this one. Here is the problem:
Prove that the implicit function theorem implies the inverse function theorem.
AI: For $f : \mathbb{R}^n \to \mathbb{R}^n$, consider $F : \mathbb{R}^n\times\mathbb{R}^n \to \mathbb{R}^n$ given by $F({\bf x}, {\bf y}) = f({\bf y}) - {\bf x}$. |
H: $X(G)=4$ then G contains $K_4$
This is a practice question in my text.
It's a true-or-false question, and I have to prove it is true: if $X(G)=4$ then $G$ contains $K_4$, where $X(G)$ is the chromatic number. I know this is true, but how do I prove it? I have found an answer online http://book.huihoo.com/pdf/graph-theory-With-applications/pdf/chapter8.pdf
but the proof there is too complicated for me. Please note this is for a first course in graph theory.
AI: The reference you found shows that a 4-chromatic graph contains a subdivision of $K_4$, not that it has $K_4$ as a subgraph.
What's the chromatic number of the graph pictured in the original answer (the figure is not reproduced here)? Does it contain a $K_4$? |
H: How to simplify $\frac{1-\frac{1-x}{1-2x}}{1-2\frac{1-x}{1-2x}}$?
$$\frac{1-\frac{1-x}{1-2x}}{1-2\frac{1-x}{1-2x}}$$
I have been staring at it for ages and know that it simplifies to $x$, but have been unable to make any significant progress.
I have tried doing $(\frac{1-x}{1-2x})(\frac{1+2x}{1+2x})$ but that doesn't help, as it leaves $\frac{1+x-2x^2}{1-4x^2}$. Any ideas?
AI: $$\frac{1-\frac{1-x}{1-2x}}{1-2\frac{1-x}{1-2x}}$$
(take L.C.M. in numerator and denominator)
$$\dfrac{\dfrac{{(1-2x)}-{(1-x)}}{1-2x}}{\dfrac{(1-2x)-2{(1-x)}}{1-2x}}$$
$$\dfrac{\dfrac{1-2x-1+x}{1-2x}}{\dfrac{1-2x-2+2x}{1-2x}}$$
$$\dfrac{\dfrac{-x}{1-2x}}{\dfrac{-1}{1-2x}}$$
since $$\dfrac{\dfrac ab}{\dfrac cd}=\dfrac ab\times\dfrac {d}{c}$$
so
$${\dfrac{-x}{1-2x}}\times{\dfrac{1-2x}{-1}}$$
$$x$$ |
H: Does adding "monotone" change the meaning?
I wonder why math texts say "the function is monotone increasing/decreasing" instead of "the function is increasing/decreasing", without the word "monotone". Nothing changes, right? Then why?
AI: Saying monotone increasing/decreasing seems to me very imprecise. I'll discuss a different version of this, namely: monotone (increasing or decreasing).
Mentioning it is monotone is sufficient. Adding (increasing or decreasing) is redundant. It's common practice in introductory books with the intent of making the reader remember what monotone means instead of (the reader) just skipping over the word.
I think a similar issue is this: given a differentiable function $f$, consider its derivative, $f'$.
The first time I read something like that I was really confused. The way it is put, it looks like the author is singling out the usual derivative $f'$ from a set containing a bunch of possible derivatives of $f$, but of course, there's only one derivative of $f$. |
H: Multiplication of rings is an abelian group homomorphism
Let $R$ be a ring without identity.
Suppose that the multiplication $ \cdot : R \times R \rightarrow R $ is an abelian group homomorphism.
For $a, b \in R$ what can we conclude about the product of $a \cdot b$ ?
AI: Let $m\colon R\times R\to R$ be the multiplication of $R$ and suppose $m$ is an abelian group homomorphism on the addition groups of $R\times R$ and $R$. Then, for any $a,b\in R$:
$$ab = m(a,b) = m((a,0)+(0,b)) = m(a,0)+m(0,b) = 0+0 = 0.$$ |
H: Cauchy principal value and the "normal" definition.
Suppose that $\int^{\infty}_{-\infty}f(x)\, dx$ exists. How can one prove that $\lim_{b\to\infty}\int^{b}_{-b}f(x)\, dx$ also exists, and that $\int^{\infty}_{-\infty}f(x)\, dx=\lim_{b\to\infty}\int^{b}_{-b}f(x)\, dx$?
AI: That $\int_{-\infty}^{\infty}f(x)\, dx$ exists means that both
$$
\int_{-\infty}^{0}f(x)\, dx,\,\int_{0}^{\infty}f(x)\, dx
$$
exist
This means that the two limits
$$
\lim_{b\to\infty}\int_{-b}^{0}\, f(x)\, dx,\,\lim_{b\to\infty}\int_{0}^{b}\, f(x)\, dx
$$
exist.
Since the two limits exist, you can add them up and get the desired result. |
H: morphisms on topological spaces
In the category of topological spaces:
1.) Show that a morphism is monic IFF it is injective
2.) Show that a morphism is epic IFF it is surjective
3.) Are there any morphisms that are monic and epic but not invertible? Prove.
4.) Show that every idempotent splits.
NOTE: A morphism $f: A \rightarrow A $ is idempotent if $ f \circ f = f$. An idempotent splits if there exist $g$ and $h$ such that $f = hg$ and $gh = 1_{A}$.
I understand that in general category theory, a monomorphism can be defined as:
$f: X \rightarrow Y$ such that for all $g_{1}, g_{2} : Z \rightarrow X$
$f\circ g_{1}=f\circ g_{2} \Rightarrow g_{1}=g_{2}$
Can I then just say that therefore $f$ is injective, and that the reverse is simple?
Or have I over simplified the problem?
I would go the same route for 2.) I don't know how to approach 3.) and 4.)
AI: 1.) Assume that $f$ is not injective. Can you find two distinct maps $g_1,g_2$ such that $f\circ g_1=f\circ g_2$? This would prove that $f$ is not a monic.
2.) Now assume that $f:X\to Y$ is not surjective. Here you have to find two distinct maps $g,h:Y\to Z$ into a space $Z$ of your choice such that $g$ and $h$ coincide on $f[X]$. HINT: A constant map is always continuous.
3.) Once you have shown that the monics are the injections and the epics are the surjections, you just have to find a continuous bijection that is not a homeomorphism. |
H: Who introduced the term Homeomorphism?
Who introduced the term Homeomorphism?
I was wondering about asking this question on english.stackexchange but I think this term is strongly (and maybe solely) related to mathematics.
AI: An excerpt from Gregory H. Moore's The evolution of the concept of homeomorphism:
The evolution of the concept of “homeomorphism” was essentially complete by $1935$ when Pavel Aleksandrov (Paul Alexandroff) at the University of Moscow and Heinz Hopf at the Eidgenossische Technische Hochschule in Zurich published their justly famous book Topologie, aiming to unify the two major branches of topology, the algebraic and the set-theoretic. They took as their fundamental undefined concept “topological space,” based on the closure axioms of Kazimierz Kuratowski [$1922$]. And they defined a homeomorphism between topological spaces in the way that is now standard: “A one–one continuous mapping $f$ of a space $X$ into a space $Y$ is called a topological mapping or a homeomorphism (between $X$ and $f(X)=Y′⊆Y$) if the inverse of $f$ is a continuous mapping of $Y′$ to $X$. Two spaces… are called homeomorphic if they can each be mapped topologically onto each other” [Aleksandrov and Hopf, $1935$, $52$]. |
H: Limit calculation using Riemann integral
My task is to calculate limit: $$\lim_{n \rightarrow \infty} \sqrt[n^2]{ \frac{(n+1)^{n+1}(n+2)^{n+2}\cdots(n+n)^{n+n}}{n^{n+1}n^{n+2}\cdots n^{n+n}} }$$I denoted that limit as $a_n$. So:
$$\log a_n=\frac{1}{n^2} \left ( (n+1)\log \left (1+\frac{1}{n} \right )+\cdots+(n+n)\log \left (1+\frac{n}{n} \right )\right )=$$$$=\cdots=\frac{1}{n} \left ( \sum_{k=1}^n\log(1+\frac{k}{n})+\sum_{k=1}^n\frac{k}{n}\log(1+\frac{k}{n}) \right )$$ The only (quite crucial however) problem I've got is the term $\frac{1}{n}$ before the above. I know how to calculate parenthesis:$$ \lim_{n \rightarrow \infty}\left ( \sum_{k=1}^n\log(1+\frac{k}{n})+\sum_{k=1}^n\frac{k}{n}\log(1+\frac{k}{n}) \right )=$$$$\int_0^1\log(1+x)dx+\int_0^1x\log(1+x)dx=\cdots$$But I have no idea what $\frac{1}{n} $does. Any hints? Thanks in advance.
AI: We will use the fact that $\int_0^1f(x)dx=\lim_{n \rightarrow \infty} \frac{1}{n}\sum_{k=1}^nf(\frac{k}{n})$ So:$$\lim_{n \rightarrow \infty} \frac{1}{n}\left ( \sum_{k=1}^n\log(1+\frac{k}{n})+\sum_{k=1}^n\frac{k}{n}\log(1+\frac{k}{n}) \right )=$$$$\int_0^1\log(1+x)dx+\int_0^1x\log(1+x)dx=
\cdots=\log4-\frac{3}{4}$$
Going back to the original limit:$$\lim_{n \rightarrow \infty}a_n=\exp(\log4-\frac{3}{4})$$ |
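A numerical check of this limit (a sketch, not part of the original answer):

```python
from math import log, exp

def log_a(n):
    # log a_n = (1/n^2) * sum_{k=1}^{n} (n + k) * log(1 + k/n)
    return sum((n + k) * log(1 + k / n) for k in range(1, n + 1)) / n**2

for n in (100, 1000, 10000):
    print(n, exp(log_a(n)))
print(exp(log(4) - 0.75))   # limiting value exp(log 4 - 3/4) ≈ 1.8895
```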
H: Proving derived sets are closed
I am following a proof of the statement
The derived set (the set of accumulation points) $A'$ of an arbitrary
subset $A$ of $\mathbb{R}^2$ is closed.
in a book.
It starts with
Let $q$ be a limit point of $A'$. If it is proved that $q \in A'$, then the proof is done.
Let $G_q$ be an open set containing $q$. Since $q$ is a limit point of $A'$, $G_q$ contains at least one point $r\in A'$ different from $q$. But $G_q$ is an open set containing $r\in A'$; (Up to this I understood) hence $G_q$ contains infinitely many points of $A$ (How? I did not get this.)
So there exists $a \in A$ such that $a \neq q$, $a \neq r$ and $a \in G_q$.
That is, each open set containing $q$ contains infinitely many points of $A$. Hence $q \in A'$.
Can you help me out.
AI: In $\Bbb R^2$, the points are closed. So, $G_q\setminus\{q\}$ is an open neighborhood of $r\in A'$, so it contains some $a_1\in A$ such that $a_1\ne r$. But, going on, $G_q\setminus\{q,a_1\}$ is also an open neighborhood of $r$, so it contains an $a_2\in A$. And so on. |
H: Goldberg polyhedra coordinates
I would like to 3D-print some Goldberg polyhedra by importing into SketchUp the coordinates provided at these links:
72 faces (2,1) - (coordinates)
132 faces (3,1) - (coordinates)
192 faces (3,2) - (coordinates)
252 faces (5,0) - (coordinates)
I noticed that they have pretty much the same volume, but I need that they have (more or less) the same edge length.
Could you help me understand how I can calculate the constants (C0, C1, C2, ...) to achieve this?
AI: I ran the coordinates through a little perl script to get some edge length statistics, as listed in the following table.
If you scale the given polyhedra with "scale factor", the average edge length will become one unit. Notice that the bounding box will grow to an approximate cube of twice the scale factor units side length.
$$\begin{matrix}\text{file} & \text{average edge}&\text{std. deviation}&\text{scale factor}\\
\text{DualGeodesicIcosahedron3.txt}&0.265145379472169&0.0124547083541117&3.77151584534764\\
\text{DualGeodesicIcosahedron6.txt}&0.19350606185143&0.0112490031974374&5.16779676270699\\
\text{DualGeodesicIcosahedron8.txt}&0.159756530507094&0.00993562997968005&6.25952502114204\\
\text{DualGeodesicIcosahedron10.txt}&0.139135767498588&0.00896851769224665&7.18722452161808\\
\end{matrix} $$ |
H: Largest domain on which $z^{i}$ is analytic.
Can anyone help me with this question:
What is the largest domain $D$ on which the function $f(z)=z^{i}$ is analytic?
AI: Rewriting as $f(z)=\exp(i\ln(z))$ suggests that $D$ contains at least the slit plane $\mathbb C\setminus(-\infty,0]$. Since the principal value of $\ln z$ jumps by $\pm2\pi i$ at the slit, we see that $f(z)$ jumps by a factor of $ e^{\mp2\pi}$, hence we cannot make $D$ any larger. (Of course there is not "the" largest domain, but only "a" largest domain depending on how we slit from $0$ to $\infty$.) |
H: A strictly positive operator is invertible
Suppose that $H$ is a Hilbert space, and $T: H \to H$ is a self-adjoint strictly positive operator (i.e. $\langle Tx,x\rangle > 0$ for all $x \neq 0$). How do I show that this operator is invertible? For example, I want to show that $\langle Tx , x\rangle$ is bounded below by some positive constant (and then I am happy, I know the rest).
Thank you,
Sasha
AI: Are you sure it is true?
Consider $T:\ell_2\to\ell_2$ which maps $(a_n)$ to $(a_n/n)$. This is clearly self-adjoint, and positive:
$$\langle Ta,a\rangle=\sum_{n\ge 1} \frac{|a_n|^2}n$$
and this is $>0$ whenever any $a_n\ne 0$.
On the other hand, $\langle Tx,x\rangle$ is not bounded below by a positive constant on the unit sphere: for the unit vector $x$ with $x_n=1$ and $x_k=0$ for $k\ne n$, we have $\langle Tx,x\rangle=1/n$. |
H: Show that "In a UFD, if $p$ is irreducible and $p\mid a$, then $p$ appears in every factorization of $a$" is false
This statement below is false, but I cannot find any counterexamples or explain why. When I tried to give some reasoning, I ended up showing the statement is true.
In a UFD, if $p$ is irreducible and $p\mid a$, then $p$ appears in every factorization of $a$.
(Here UFD is the abbreviation for Unique Factorization Domain.)
So, my explanation goes like this: since $a$ is in a UFD, there is a unique factorization of it, and the factorisation consists of irreducibles, therefore if $p\mid a$ then it is clear that $p$ appears in every factorization of $a$ since the factorization is unique.
Could anyone kindly show me where did I go wrong?
Thanks!
AI: Remember that the factorizations are unique only up to a unit multiple. Thus, even in $\mathbb Z$--or any UFD in which there are nonidentity units and irreducibles--we can find an example of this phenomenon. E.g. $2\mid 4$ but also $4=(-2)\cdot (-2)$. |
H: Prove that $T$ is an orthogonal projection
Let $T$ be a linear operator on a finite-dimensional inner product space $V$. Suppose that $T$ is a projection such that $\|T(x)\| \le \|x\|$ for $x \in V$. Prove that $T$ is an orthogonal projection.
I can't understand well. The definition of orthogonal operator is $\|T(x)\| = \|x\|$. But why that $\|T(x)\| \le \|x\|$ for $x \in V$ means orthogonal projection?
AI: Note that $\|Tx\|=\|x\|$ is the definition of an isometry. Over a finite-dimensional real inner product space, this is equivalent to the matrix of $T$ in an orthonormal basis being "orthogonal", i.e. $A^TA=AA^T=I_n$.
Orthogonal projection means $T^2=T$ and $T^*=T$, i.e. self-adjoint idempotent. The first thing you should remark is that this is equivalent to $T$ being idempotent with $\ker T \perp \mbox{im} T$. Indeed, if $T$ is idempotent, then $T^*$ is idempotent with $\ker T^*=(\mbox{im} T)^\perp$ and $\mbox{im} T^*=(\ker T)^\perp$.
Claim: if $T$ is idempotent and $\|T\|\leq 1$ (i.e. $\|Tx\|\leq \|x\|$ for all $x\in V$), then $T$ is self-adjoint (i.e. $T$ is an orthogonal projection).
Remark: the converse is true since then the orthogonal direct sum $V=\ker T\oplus \mbox{im} T$ yields $\|x+y\|^2=\|x\|^2+\|y\|^2\geq \|y\|^2=\|0+Ty\|^2=\|T(x+y)\|^2$ for all $x\in \ker T$ and all $y\in \mbox{im} T$. Note that all this holds on a general inner product space. No need to assume finite dimension.
Proof: we need to prove that $(x,y)=0$ for every $x\in \ker T$ and every $y\in\mbox{im} T$. Let us take two such vectors, which are characterized by $Tx=0$ and $Ty=y$. Then for every $t\in\mathbb{R}$
$$
t^2\|y\|^2=\|ty\|^2=\|T(x+ty)\|^2\leq \|x+ty\|^2=\|x\|^2+2t\,\mbox{Re}(x,y)+t^2\|y\|^2
$$
whence
$$
\mbox{Re}(x,y)\geq -\frac{\|x\|^2}{2t}\;\forall t>0\qquad\mbox{and}\qquad \mbox{Re}(x,y)\leq -\frac{\|x\|^2}{2t}\;\forall t<0
$$
which implies $\mbox{Re}(x,y)=0$ by letting $t$ tend to $\pm \infty$. In the real case, we are done since $\mbox{Re}(x,y)=(x,y)=0$. In the complex case, take $e^{i\theta}$ such that $|(x,y)|=e^{i\theta}\mbox{Re}(x,y)=\mbox{Re}(x,e^{i\theta}y)$ and apply the above to $x,e^{i\theta}y$ to conclude that $|(x,y)|=0$ whence $(x,y)=0$. QED. |
H: Why don't we include $\pm\infty$ in $\mathbb R$?
Why don't we include $\pm\infty$ in $\mathbb R$?
If we do so, many equations will got real solution (e.g. $2^x=0$), and $\mathbb R$ will be much more complete. Why don't we do so?
Thank you.
AI: A large reason why we don't include $\infty$ is because we can't really do arithmetic with it. $\mathbb{R}$ is a field, meaning that it satisfies a list of axioms that give it a certain structure. I suggest you look up these axioms if you're not familiar with them and see just how many of them start to fail if you try to include an $\infty$ in $\mathbb{R}$.
Yes, including $\infty$ may give you a few extra solutions to some problems, but it won't solve every problem ($x^2=-2$ for example) and in some cases, it will introduce answers you probably don't want (would $x=x+2$ now have the solution $x=\infty$?). All of this isn't really worth all of the issues you get for including $\infty$ in $\mathbb{R}$. |
H: How to get out of this indetermination $\lim\limits_{x\to -1} \frac{\ln(2+x)}{x+1}$?
Well, it's just that I can't remember a way to get out of this indeterminate form (involving a logarithm); can someone help me?
I'm studying for my calculus test and this question is taking me some time.
$$\lim\limits_{x\to -1} \frac{\ln(2+x)}{x+1}$$
AI: There are at least three methods:
$(1):$ Utilize $$\lim_{h\to0}\frac{\ln(1+h)}h=1$$ putting $h=1+x$ (as I have suggested in the comment)
$(2):$ Use Maclaurin series of $\ln(1+y)$ by putting $y=1+x$
$(3):$ Use L'Hospital's Rule as $\lim_{x\to-1}\frac{\ln(2+x)}{x+1}$ is of the form $\frac00$ |
H: Order of the group generated by two matrices
I need to find the order of the group generated by the matrices
$$\begin{pmatrix}0&1\\-1&0\end{pmatrix},\begin{pmatrix}0&i\\-i&0\end{pmatrix}$$
under multiplication.
$\begin{pmatrix}0&1\\-1&0\end{pmatrix}\begin{pmatrix}0&i\\-i&0\end{pmatrix}=\begin{pmatrix}-i&0\\0&-i\end{pmatrix}$
and $\begin{pmatrix}-i&0\\0&-i\end{pmatrix}^4=\begin{pmatrix}1&0\\0&1\end{pmatrix}$ so $4$ is the order? am I right?
AI: No. This group contains, e.g., $\begin{pmatrix}0&1\\-1&0\end{pmatrix}^2=\begin{pmatrix}-1&0\\0&-1\end{pmatrix}$.
Addendum: Let $A=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$ and $B=\begin{pmatrix}0&i\\-i&0\end{pmatrix}$. We have $A^4=I, B^2=I, AB=BA$. Hence the group generated by them is $\langle A\rangle \times \langle B \rangle $ and has order 8. |
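A quick enumeration confirming the order (a sketch using numpy, not part of the original answer; it closes $\{A,B\}$ under multiplication starting from the identity):

```python
import numpy as np

A = np.array([[0, 1], [-1, 0]], dtype=complex)
B = np.array([[0, 1j], [-1j, 0]], dtype=complex)

def key(M):
    # hashable, rounded representation of a matrix
    return tuple(np.round(M, 10).flatten())

I = np.eye(2, dtype=complex)
group = {key(I): I}
frontier = [I]
while frontier:
    M = frontier.pop()
    for G in (A, B):
        P = M @ G
        if key(P) not in group:
            group[key(P)] = P
            frontier.append(P)
print(len(group))   # 8
```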
H: Extremal of a function -Euler equation
I have to calculate $J(t+h)-J(t)$ where $J(x)=\int_0^1 x'^3 dt$, $x=x(t)$, $h\in C^1[0,1]$, $h(0)=h(1)=0$
I have solution, I will write it below, and I will write my question.
$J(t+h)-J(t)
=\int_0^1 (t+h)'^3 dt - \int_0^1 t'^3 dt
=\int_0^1 (1+h')^3 dt - \int_0^1 dt
=\int_0^1 (1+3h'+3h'^2+h'^3-1) dt
=3\int_0^1 (h'+h'^2+\frac{1}{3} h'^3) dt
=3\int_0^1 h'^2(1+\frac{1}{3} h') dt$
My question is what happened with $h'$ from the integral before the last one?
Can you explain that to me? It appears in a lot of these kind of problems and I don't know why is that.
AI: $$
3\int_0^1 h' dt = 3(h(1)-h(0)) = 0
$$ |
H: Understanding the proof for: $d(f^*\omega)\overset{!}{=}f^*(d\omega)$
Consider this
Proposition: Let $U\subset\mathbb{R}^n$ and $V\subset\mathbb{R}^m$ be open sets and $\phi:U\to V$ be differentiable. For all $k\in\mathbb{N}_0$ and $\omega\in \Lambda^k(V)$ it is true that
$$d(\phi^*\omega)=\phi^*(d\omega)$$
I am trying to understand its proof. But there are some steps I do not understand. Here are the first lines of the
Proof: At first let $f\in \mathcal{C}^\infty (V)$ be a differential form of degree $0$. Then $\phi^*(f)=f\circ\phi$. Hence
\begin{eqnarray*}
d(\phi^*(f))
&=&d(f\circ\phi)\\
&=&\sum_{j=1}^{n}\frac{\partial(f\circ\phi)}{\partial x_j}dx_j\\
&\overset{?}{=}&\sum_{i,j}\frac{\partial f}{\partial y_i}\circ\phi(\frac{\partial (\phi_i)}{\partial x_j}dx_j)\\
&=&\sum_{i=1}^{m}\frac{\partial f\circ\phi}{\partial y_i}d\phi_i\\
\end{eqnarray*}
I marked the position I don't understand with a question mark. What exactly happens here?
AI: As mentioned by Daniel, this is just the chain rule. As for your comment, I think what is written in your post is not correct. I claim it should be $$\sum_{i,j}\left(\frac{\partial f}{\partial y_i}\circ\phi\right)\cdot\left(\frac{\partial\phi_i}{\partial x_j}\right)dx_j$$ where $\cdot$ denotes multiplication of functions. Now you can see how it is just the chain rule. For two differentiable functions $f, g : \mathbb{R} \to \mathbb{R}$, the composition $f \circ g :\mathbb{R} \to \mathbb{R}$ is differentiable and $$(f\circ g)'(a) = f'(g(a))\cdot g'(a)$$ where $\cdot$ denotes multiplication of real numbers. Without the value $a$, this is just the statement that $$(f\circ g)' = (f'\circ g)\cdot g'$$ where $\cdot$ denotes multiplication of functions. |
H: The value of the $\lim_{n\rightarrow \infty}\frac{1}{n}\sum_{k=1}^{n}\frac{\sqrt[k]{k!}}{k}$
What is the value of $$\lim_{n\rightarrow \infty}\frac{1}{n}\sum_{k=1}^{n}\frac{\sqrt[k]{k!}}{k}?$$
AI: Let $x_k=\dfrac{k!}{k^k}$
Then $$\lim_{k\to\infty}\frac{x_{k+1}}{x_k}=\lim_{k\to\infty}\frac{(k+1)!k^k}{k!(k+1)(k+1)^k}=\lim_{k\to\infty}\left(1+\frac 1k \right)^{-k}= e^{-1}$$
This implies${}^{(1)}$ that $$\lim_{k\to\infty} x_k^{1/k}=e^{-1}$$
Then, since $x_k^{1/k}=a_k$ converges${}^{(2)}$ and has value $e^{-1}$ $$\sigma_n=\frac 1 n\sum_{k=1}^n a_k$$ also converges, and has value $$e^{-1}$$
$(1)$: Follows from $$\liminf \frac{x_{n+1}}{x_n}\leq \liminf x_n^{1/n}\leq \limsup x_n^{1/n}\leq \limsup \frac{x_{n+1}}{x_n}$$
$(2)$: Follows from $$\liminf {a_n}\leq \liminf \sigma_n \leq \limsup \sigma_n \leq \limsup {a_n}$$
where $\displaystyle \sigma_n:=\frac 1 n \sum_{k=1}^n a_k$. |
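A numerical check of the Cesàro mean (a sketch, not part of the original answer; it uses lgamma to compute $(k!)^{1/k}$ without overflow):

```python
from math import lgamma, exp

def a(k):
    # (k!)^(1/k) / k, via log(k!) = lgamma(k + 1)
    return exp(lgamma(k + 1) / k) / k

for n in (100, 1000, 10000):
    print(n, sum(a(k) for k in range(1, n + 1)) / n)
print(exp(-1))   # limit 1/e ≈ 0.36788 (convergence is slow)
```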
H: The angle at which a circle and a hyperbola intersect?
$x^2 - 2y^2 = 4$
$ (x-3)^2 + y^2 = 25 $
How do you calculate the angle at which a circle and a hyperbola intersect?
If I express $y^2$ from the first equation and apply it to the second equation, I get the following:
$y^2 = -2 + \frac{x^2}{2}$
$(x-3)^2 - 2 + \frac{x^2}{2} = 25 \;\Rightarrow\; x^2 - 4x - 12 = 0$
$x_1 = 6, x_2 = -2 \implies y_1 = 4, y_{1'} = -4, y_2 = 0$
Now, for the points $(6,4)$ I could calculate the line equation which intersects the circle and the hyperbola: $(6-3)(x-3) + (4-0)(y-0) = 25 \implies y = -\frac{3x}{4} + \frac{17}{2}$
I calculated this because I thought I could apply the formula $\tan\phi = |\frac{k_2 - k_1}{1 + k_1k_2}|$
AI: HINT:
$(1):$ From article $151$ of The Elements of Coordinate Geometry (Loney), the gradient $(m_1)$ of $x^2+y^2+2gx+2fy+c=0$ is $-\frac{x_1+g}{y_1+f}$, where $(x_1,y_1)$ is the given point on the circle
and from the Article $305,262$ the gradient $(m_2)$ of $\frac{x^2}{a^2}-\frac{y^2}{b^2}=1$ is $\frac{b^2\cdot x_2}{a^2\cdot y_2}$ where $(x_2,y_2)$ is the given point on the hyperbola
$(2):$ Alternatively, find the gradients of each curve at $(6,4)$ and at $(-2,0)$ applying first order derivative
The acute angle between the curves will be $$\arctan\left|\frac{m_1-m_2}{1+m_1m_2} \right|$$ |
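Following hint $(2)$, a small numerical computation of the angle at $(6,4)$ (a sketch, not part of the original answer; the gradients come from implicit differentiation of each curve):

```python
from math import atan, degrees

# hyperbola x^2 - 2y^2 = 4   =>  y' = x / (2y)
# circle (x-3)^2 + y^2 = 25  =>  y' = -(x - 3) / y
m1 = 6 / (2 * 4)
m2 = -(6 - 3) / 4
phi = degrees(atan(abs((m1 - m2) / (1 + m1 * m2))))
print(phi)   # arctan(24/7) ≈ 73.74 degrees
# At (-2, 0) both tangent lines are vertical (x = -2), so the curves meet at angle 0 there.
```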
H: Is $S=\sum_{r=1}^\infty \tan^{-1}\frac{2r}{2+r^2+r^4}$ finite?
Problem:
If $$S=\sum_{r=1}^\infty \tan^{-1}\left(\frac{2r}{2+r^2+r^4}\right),$$ then find $S$.
Solution:
I know that $\tan^{-1} x + \tan^{-1} y= \tan^{-1} \frac {x +y} {1-xy} $
But I have no idea how to solve such a complicated question with it.
AI: HINT:
As $1+a^2+a^4=(1+a^2)^2-a^2=(1+a^2-a)(1+a^2+a),$
$$\frac{2\cdot a}{2+a^2+a^4}=\frac{(1+a^2+a)-(1+a^2-a)}{1+(1+a^2-a)(1+a^2+a)}$$
$$\implies \arctan \left(\frac{2\cdot a}{2+a^2+a^4}\right)=\arctan(1+a^2+a)-\arctan(1+a^2-a)$$
Can you recognize the Telescoping series?
So, $$\sum_{1\le r\le n}\arctan \left(\frac{2\cdot r}{2+r^2+r^4}\right)=\arctan (1+n^2+n)-\arctan 1$$
$$=\arctan\left(\frac{n^2+n}{n^2+n+2}\right)=\arctan\left(\frac{1+\frac1n}{1+\frac1n+\frac2{n^2}}\right)$$
$$\implies \lim_{n\to\infty}\arctan\left(\frac{1+\frac1n}{1+\frac1n+\frac2{n^2}}\right)=\arctan1=\frac\pi4$$ |
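A numerical check of the partial sums (a sketch, not part of the original answer):

```python
from math import atan, pi

def partial(n):
    return sum(atan(2 * r / (2 + r**2 + r**4)) for r in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, partial(n))
print(pi / 4)   # limit ≈ 0.7853981633974483
```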
H: Equivalent conditions for a measurable function
I am reading Stein and Shakarchi volume 3 and on page 28 they give the definition of a Lebesgue measurable (real - valued) function $f: \Bbb{R}^d \to \Bbb{R}$ to be on in which for any $a \in \Bbb{R}$, $f^{-1}(-\infty,a)$ is Lebesgue measurable. Now I am wondering it this is equivalent to the following conditions:
(1) $f$ is a Lebesgue measurable function if for any Lebesgue measurable set $E$, $f^{-1}(E)$ is Lebesgue measurable.
(2) For almost every $a \in \Bbb{R}$, $f^{-1}\left((-\infty,a)\right)$ is Lebesgue measurable.
My question is: Are these equivalent to the definition given in Stein and Shakarchi?
AI: (2) is equivalent to the Stein-Shakarchi definition (which let's call (0)). (1) is strictly stronger.
(0) is actually equivalent to the following statement: (3) there is a dense set $E \subset \mathbb{R}$ such that for every $a \in \mathbb{R}$, $f^{-1}((-\infty, a))$ is Lebesgue measurable. To see (3) implies (0), note that for any $a \in \mathbb{R}$ we can find a sequence $a_n \in E$ with $a_n \uparrow a$. Now observe that
$$f^{-1}((-\infty, a)) = f^{-1}\left(\bigcup_n (-\infty, a_n) \right) = \bigcup_n f^{-1}((-\infty, a_n))$$
So $f^{-1}((-\infty, a))$ is a countable union of Lebesgue measurable sets, hence Lebesgue measurable.
Now (2) implies (3) because every set of full Lebesgue measure is dense. (If it were not dense, its complement would contain an interval, but intervals have positive Lebesgue measure.) Obviously (0) implies (2).
Clearly (1) implies (0). However, the converse fails; for instance, every continuous function satisfies (0) (open sets are Lebesgue measurable) but there are continuous functions which do not satisfy (1). For more information on this, see an answer I wrote on MathOverflow. |
H: is there a formula for working out the angles of a triangle to make the sides meet at the top?
I am doing a GCSE maths foundation paper for revision and one question has a triangle with the base side being 9cm and the other 2 sides 7.5cm. Is there a formula for finding the angles of the triangle given the lengths of each side so that the 2 side lengths join together at the top to complete the triangle?
Thanks
AI: HINT:
Use cosine formula of triangles
$$\cos A=\frac{7.5^2+7.5^2-9^2}{2\cdot7.5\cdot 7.5}$$ |
H: How do I solve such logarithm
I understand that
$\log_b n = x \iff b^x = n$
But all examples I see is with values that I naturally know how to calculate (like $2^x = 8, x=3$)
What if I don't? For example, how do I solve for $x$ when:
$$\log_{1.03} 2 = x\quad ?$$
$$\log_{8} 33 = x\quad ?$$
AI: The logarithm $\log_{b} (x)$ can be computed from the logarithms of $x$ and $b$ with respect to a positive base $k$ using the following formula:
$$\log_{b} (x) = \frac{\log_{k} (x)}{\log_{k} (b)}.$$
So your examples can be solved in the following way with a calculator:
$$x = \log_{1.03} (2) = \frac{\log_{10} (2)}{\log_{10} (1.03)} =
\frac{0.30103}{0.012837} \approx 23.450, $$
$$x = \log_{8} (33) = \frac{\log_{10} (33)}{\log_{10} (8)} =
\frac{1.5185}{0.9031} \approx 1.681.$$
If you know that $b$ and $x$ are both powers of some $k$, then you can evaluate the logarithm without a calculator by the power identity of logarithms, e.g.,
$$x = \log_{81} (27) = \frac{\log_{3} (27)}{\log_{3} (81)} =
\frac{\log_{3} (3^3)}{\log_{3} (3^4)} = \frac{3 \cdot \log_{3} (3)}{4 \cdot \log_{3} (3)} =
\frac{3}{4}.$$ |
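These values are easy to check (a sketch; Python's math.log accepts an optional base argument, which is exactly the change-of-base formula):

```python
from math import log, log10

print(log(2, 1.03), log10(2) / log10(1.03))   # both ≈ 23.45
print(log(33, 8), log10(33) / log10(8))       # both ≈ 1.6815
print(log(27, 81))                            # 0.75, i.e. 3/4
```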
H: How to express the powers of two by a Diophantine equation?
Let $P_2:=\{2^k | k\in\mathbb N\}$ be the set of powers of two. I would like to "see" a polynomial $p(z_1,\ldots,z_r)$ with integer coefficients for which $P_2 =\{n\in\mathbb N | n=p(z_1,\ldots,z_r)\text{ with }z_1,\ldots,z_r\in\mathbb Z\}$. This should be possible, because $P_2$ is clearly recursively enumerable. If it should be easier to present such a polynomial for the set of powers of another prime number, or for the set of Fibonacci numbers, this would be sufficient for me as well.
AI: For the Fibonacci numbers, full details are given in this paper by Jones. |
H: Limit of sequence equals limit of convergent subsequences
Let $\{a_n\}$ be a bounded sequence. Prove that if every convergent subsequence of $\{a_n\}$ has limit $L$, then $\lim_{n\rightarrow\infty}a_n = L$.
I know that if the sequence has a limit, it must be $L$, because the limit must equal the limit of the subsequence. But how can I prove the sequence has a limit?
AI: Suppose $(a_i)$ does not converge to $L$. Then for some $\epsilon$, for any $N$ there exists $n>N$ such that $|a_n-L|>\epsilon$. So I can find a subsequence $b_1,b_2,\ldots$ such that $|b_i-L|>\epsilon$ for all $i$. Since $(b_i)$ is bounded, by Bolzano-Weierstrass I can find a convergent subsequence $(c_i)$ of it. Clearly $(c_i)$ cannot have limit $L$, a contradiction. |
H: Is there some nomenclature to get the remainder of a value?
I want to write a formula where I can say that I have to get the remainder of a division by 4.
$y = \mathbf{remainder}(x\div4)$
Is there any math nomenclature I can use?
AI: You can write $y = x \bmod 4$ (read: "$x$ modulo $4$").
See modular arithmetic. |
H: Limit of $\cos(x)/x$ as $x$ approaches $0$
As the title says, I want to show that the limit of
$$\lim_{x\to 0} \frac{\cos(x)}{x}$$
doesn't exist.
Now for that I'd like to show in a formally correct way that
$$\lim_{x\to 0^+} \frac{\cos(x)}{x} = +\infty$$
I'm sure this is right since $\displaystyle\lim_{x\to 0^+} \cos(x) = 1$ and $\displaystyle\lim_{x\to 0^+} x = 0$, but since $\displaystyle\lim_{x\to 0^+} x = 0$ I can't just say:
$$\lim_{x\to 0^+} \frac{\cos(x)}{x} = \frac{1}{\displaystyle\lim_{x\to 0^+} x}$$
Things I tried: expand the fraction and use L'Hospital's rule (This didn't seem to yield results even though I tried quite a few things, should it? If it does, should it always work? It worked for similar problems) and going by the definition of the limit which didn't work for me either.
I'm grateful for any hints for doing this problem and similar problems especially.
AI: Hint: You can find a region near $x=0$ where $\cos x \gt \frac 12$ |
H: Is the set $\{(x, y) : 3x^2 − 2y^ 2 + 3y = 1\}$ connected?
Is the set $\{(x, y)\in\mathbb{R}^2 : 3x^2 − 2y^ 2 + 3y = 1\}$ connected?
I have checked that it is a hyperbola, hence disconnected. Am I right?
AI: Assuming, you mean
$$S=\{(x,y)\in\mathbb R^2:3x^2-2y^2+3y=1\}, $$
observe that $f\colon(x,y)\mapsto y-\frac23$ is a continuous map. There is no point $(x,y)\in S$ with $f(x,y)=0$ because $3x^2-2\cdot \frac49+3\cdot \frac23=1$ (i.e. $3x^2=-\frac19$) has no real solution. On the other hand, $(0,1)\in S$ and $(0,\frac12)\in S$, so that $f$ does take both positive and negative values. Therefore we can write $S=f^{-1}((-\infty,0))\cup f^{-1}((0,\infty))$ as a disjoint union of nonempty open sets, i.e. $S$ is not connected. |
H: Natural transformations in $\textbf{Set}$
I am trying to understand the concept of a natural transformation by considering the following example, an exercise from Mac Lane's Categories for the working mathematician (p. 18, ex. 1):
Let $S$ be a fixed set and denote by $X^S$ the set of all functions $S\to X$. Show that $X\mapsto X^S$ is the object function of a functor $\textbf{Set}\to \textbf{Set}$ and that evaluation $e_X:X^S\times S\dot{\to} X$, defined by $e(h,s)=h(s)$, the value of the function $h$ at $s\in S$, is a natural transformation.
I am having problems with both parts of the question. First of all, I am not sure how to show that $X\mapsto X^S$ is the object function. I need to check the two axioms of a functor, the identity property is easy, since for any functor $T:\mathbf{Set}\to\mathbf{Set}$ and a set $A$ we can define $T(\mathrm{id}_A)=\mathrm{id}_{A^S}$. I also need to show that for any composite of morphisms $g\circ f$ we have
$$
T(g\circ f)=Tg\circ Tf
$$
Given a function $f:A\to B$, how to define a function $Tf:A^S\to B^S$? Suppose that $h:S\to A$ is an element of $A^S$, can I define the image of $h$ under $Tf$ to be the function $g=f\circ h$? Does $T$ then satisfy the composition axiom?
Another problem is that a natural transformation is defined for two functors, whereas here I am only given one. What is the second functor?
AI: As you write, you should define $T(f)(h)=f \circ h$ for a function $f: A \rightarrow B$ and $h \in T(A)=A^S$. Then $T(f \circ g)(h)=(f \circ g) \circ h=f \circ (g \circ h)=T(f)(T(g)(h))$. Also, $T(1)=1$ (where $1$ is the identity on $A$).
The other functor is simply the identity $1$, and you are to check that evaluation is a natural transformation from $T$ to $1$. |
H: Find an analytic function with real part $\frac{y}{x^{2}+y^{2}}$
How do I find an analytic function such that $\displaystyle \mathfrak{Re}(f) =u(x,y)= \frac{y}{x^{2}+y^{2}}$?
I can call the real part $u(x,y)$ and by Cauchy-Riemann I will have $u_{x}=v_{y}$ and $u_{y}=-v_{x}$. So $$v_{y}=u_{x}(x,y)= -\frac{2xy}{(x^{2}+y^{2})^{2}} \ ; \qquad v_{x}=-u_{y}=\frac{y^{2}-x^{2}}{(x^{2}+y^{2})^{2}}$$
After this what should I do? An elaborate solution will help.
AI: By eyeballing the formula, we have that $y=\Re(-iz)$ and $x^2+y^2=z\overline z$. So the function $$f(z)=\frac{-iz}{z\overline z}=\frac{-i}{\overline z}$$ would work - if only it were analytic.
Now finally observe that complex conjugation does not change the real part, hence taking
$$ f(z)=\overline{\frac {-i}{\overline z}}=\frac{-\overline i}{z}=\frac iz$$
does the trick. |
H: any open ball of radius $2$ is an infinite set?
Is it true that in an infinite metric space, any open ball of radius $2$ is an
infinite set?
for example $\mathbb{R}^2$ with discrete metric we have $d(x,y)=1\forall x\ne y$
so in this case we also have the whole of $\mathbb{R}^2$ within a ball of radius $2$, right? Is my concept okay?
AI: What do you mean by "infinite metric space"? Take $\Bbb Z^2$ with the usual Euclidean metric, and you can see the $2$-ball centered at $0$ contains finitely many elements. |
H: finding overlapping permutations
I have a data set $3\; 4\; 5\; 6\; 7\; 8\; 9$
I want to find all the permutations that can be formed using this such that neither $7$ nor $8$ is adjacent to $9$.
AI: We count the complement, the bad permutations where $7$ or $8$ is adjacent to $9$.
How many permutations have $7$ adjacent to $9$? There are $6$ places for the pair to go, and they can be in either order, giving a total of $(6)(2)$ ways to place $7$ and $9$. The rest can be placed in $5!$ ways, for a total of $(6)(2)(5!)$.
Similarly, there are $(6)(2)(5!)$ bad permutations with $8$ adjacent to $9$.
If we add, we have double-counted the permutations where $7$ and $8$ are both adjacent to $9$. to count these, note that the $9$ can be put in $5$ places. For each of these, there are $2$ ways to place the $7$ and $8$, and $4!$ ways to place the rest. So the number of bad permutations is
$$(6)(2)(5!)+(6)(2)(5!)-(5)(2)(4!).$$ |
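A brute-force check of the count (a sketch, not part of the original answer):

```python
from itertools import permutations

data = [3, 4, 5, 6, 7, 8, 9]

def ok(p):
    i = p.index(9)
    neighbors = {p[j] for j in (i - 1, i + 1) if 0 <= j < len(p)}
    return 7 not in neighbors and 8 not in neighbors

good = sum(ok(p) for p in permutations(data))
bad = 6 * 2 * 120 + 6 * 2 * 120 - 5 * 2 * 24   # the inclusion-exclusion count above
print(good, 5040 - bad)                         # both 2400
```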
H: Evaluating $\int_0^{2 \pi} \frac {\cos 2 \theta}{1 -2a \cos \theta +a^2}$
In order to evaluate $\int_0^{2 \pi} \frac {\cos 2 \theta}{1 -2a \cos \theta +a^2}\,d\theta$ we can define
$$
f(z) := \frac 1 z \cdot \frac { (z^2+z^{-2})/2}{1-2a( \frac {z+z^{-1}} 2) +a^2}
$$ I have $0 < a <1$, which gives singular points at $0$ and $a$ that lie inside the unit circle. I want to calculate the residues at those points to use the residue formula. How can I evaluate the residues here?
AI: With
$$f(z)=-\frac1{2a}\frac{z^4+1}{z^2(z-a)(z-a^{-1})}$$
with a single pole (within the unit circle) at $\,z=a\;$ and a double one at $\,z=0$ :
$$\text{Res}_{z=a}(f)=\lim_{z\to a}(z-a)f(z)=-\frac1{2a}\frac{a^4+1}{a^2(a-a^{-1})}=-\frac{a^4+1}{2a^2(a^2-1)}$$
$$\text{Res}_{z=0}(f)=\lim_{z\to 0}\frac d{dz}\left(z^2f(z)\right)=-\frac{a^2+1}{2a^2}$$ |
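A numerical cross-check of these residues against the original real integral (a sketch, not part of the original answer; with $z=e^{i\theta}$ the integral equals $\frac1i\oint f(z)\,dz$, i.e. $2\pi$ times the sum of the two residues):

```python
import numpy as np

a = 0.4
theta = np.linspace(0, 2 * np.pi, 200001)
integrand = np.cos(2 * theta) / (1 - 2 * a * np.cos(theta) + a**2)
numeric = np.trapz(integrand, theta)

res_a = -(a**4 + 1) / (2 * a**2 * (a**2 - 1))   # residue at z = a
res_0 = -(a**2 + 1) / (2 * a**2)                # residue at z = 0
print(numeric, 2 * np.pi * (res_a + res_0))     # both ≈ 2*pi*a^2/(1-a^2) ≈ 1.19680
```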
H: Spectral radius of $A$ and convergence of $A^k$
I'm trying to understand the proof of the first theorem here. Maybe it's very simple, but I would like your help because I need to understand this; I don't have much time and my knowledge of this subject is very limited. Some users have already helped me here and here, but I still have doubts.
The statement for which I need an explanation is this: "Thus, if $\rho(A)<1$ then $|\lambda_i|<1$".
Matrices $A$ and $J$ have the same eigenvalues, because they are similar. What is the relationship between this fact and the scalars $\lambda_i$? Is $\lambda_i$ an eigenvalue of $A$? Why?
Thanks.
AI: As others have noted in comments, your problem seems to be with the understanding of the Jordan normal form. If $A$ has a Jordan normal form $J,$ then each entry on the main diagonal of $J$ is an eigenvalue of $J$ and hence also of $A,$ because $J$ is similar to $A.$ The multiplicity of an eigenvalue is slightly more tricky, but still routine, to calculate. |
H: In Levenberg–Marquardt, is forcing the Hessian to be positive definite OK?
I am often doing parameter estimation using the Levenberg-Marquardt method, which involves solving the following linear system at each step:
$$(H + \lambda I) \delta = r_{i}$$
where $H$ is a square Hessian matrix, $I$ is an identity matrix, $r_{i}$ is residual vector (at $i$-th iteration), $\lambda$ is a damping factor, $\delta$ is improvement step to compute.
The $\lambda$ value is decreased when the step improves the solution (reduces the objective value) and increased otherwise. The $\lambda$ parameter allows solving ill-posed problems, as it makes the Hessian positive definite. In most cases $H$ is positive definite by itself, but sometimes not. What should I do in that case? Should I stop the iteration completely, or increase lambda until $H$ becomes positive definite and solve the problem normally?
AI: A similar question has been asked on MO a while ago. The answer is that you should neither stop the iteration, nor increase lambda. You could use a QR decomposition with pivoting, and set very small diagonal elements of R to zero (or use a singular value decomposition if this is a theoretical question). Another suggestion if this is too expensive was to add a small multiple of the identity to $J^TJ$ rather than by multiplying the diagonal elements by $(1+\lambda)$.
This last suggestion actually shows that your presentation of the problem is not accurate. You are not really solving $(H+\lambda I)\delta=r_{i}$. (In that case, any $\lambda>0$ would make your problem positive definite, because the Hessian $H$ is semi-definite.) Instead, the Levenberg-Marquardt method solves
$$(J^T J + \lambda\, \operatorname{diag}(J^T J)) \delta = J^T [y - f(\boldsymbol \beta)]$$
Here, it can indeed happen that the problem stays singular for $\lambda>0$, but increasing $\lambda$ won't help. |
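A minimal sketch of one damped step in the form described in the answer (assuming a residual vector and model Jacobian are available; the toy model and all names here are illustrative, not from any particular library):

```python
import numpy as np

def lm_step(J, r, lam):
    # Solve (J^T J + lam * diag(J^T J)) delta = J^T r for one LM step
    JtJ = J.T @ J
    A = JtJ + lam * np.diag(np.diag(JtJ))
    return np.linalg.solve(A, J.T @ r)

# Toy example: model y = b0 * exp(b1 * x), current guess beta
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)
beta = np.array([1.0, 1.0])
r = y - beta[0] * np.exp(beta[1] * x)                      # residuals y - f(beta)
J = np.column_stack([np.exp(beta[1] * x),                  # d f / d b0
                     beta[0] * x * np.exp(beta[1] * x)])   # d f / d b1
print(lm_step(J, r, lam=1e-3))
```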
H: Intersection of line and circle to evaluate integral
I have to evaluate $\iint_{D}(x^2+y^2-1)\,dA$ over $D = \left \{(x,y) \in R^2: 0\leq x \leq y,\; x^2+y^2\leq 1 \right \}$.
My problem is defining the integral limits for $x$ and $y$.
I believe that the region is the intersection of the circle with the line. Doing that, I got
$$\int_{0}^{\sqrt{1/2}}\int_{0}^{1}(x^2+y^2-1)\,dy\,dx$$
Is this correct?
AI: Make a sketch. If you really want to use rectangular coordinates, then $y$ will go from $x$ to $\sqrt{1-x^2}$. The integration is quite feasible.
The natural thing to do, however, is to switch to polar coordinates. So $dx\,dy$ becomes $r\,dr\,d\theta$, and you are integrating from $r=0$ to $r=1$, and $\theta=\frac{\pi}{4}$ to $\theta=\frac{\pi}{2}$. |
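A quick numerical check of the polar setup (a sketch, not part of the original answer; the exact value of the integral works out to $-\pi/16$):

```python
import numpy as np

# Trapezoid-rule check of the polar form: integrand (r^2 - 1) * r,
# r in [0, 1], theta in [pi/4, pi/2]
r = np.linspace(0.0, 1.0, 2001)
theta = np.linspace(np.pi / 4, np.pi / 2, 2001)
R, T = np.meshgrid(r, theta)            # rows vary theta, columns vary r
vals = (R**2 - 1) * R
numeric = np.trapz(np.trapz(vals, r, axis=1), theta)
print(numeric, -np.pi / 16)             # both ≈ -0.19635
```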
H: Exercice on periodic function
Let $f$ be a periodic function, $\mathcal{C}^1$ on $\mathbb{R}$ such that:
$$\displaystyle\int_0^{2 \pi} f(t) \, dt = 0$$
$$f(2 \pi) = f(0)$$
Prove that $$\int_0^{2 \pi} |f(t)|^2 \,dt \leq \int_0^{2 \pi} |f'(t)|^2\, dt.$$
How can we prove this, please? I don't have any idea.
AI: Hint: Expand $f$ in Fourier series. |
H: How to calculate $\int_{0}^{2\pi} \sin(x)(\cos x+\sqrt{2-\cos(x)})\;\mathrm dx$ with substitution?
I have to calculate $$\int_{0}^{2\pi} \sin x \cdot(\cos x+\sqrt{2-\cos x})\,\mathrm dx$$ using the substitution $u(x)=2-\cos x$.
What I got so far is:
$$\int_{0}^{2\pi} \sin x\cdot(\cos x+\sqrt{2-\cos x})\,\mathrm dx = \int_{0}^{2\pi} \sin x \cdot(\cos x+\sqrt{u})\,\mathrm dx$$
and with $u'=du/dx$ I get $dx=du/\sin x$, that makes:
$$\int_{0}^{2\pi} (\cos x+\sqrt{u})du$$
Now, integrating $\sqrt u$ isn't the problem, but how to deal with $\cos x$ when it says $du$ instead of $dx$?
AI: Why not try a simple substitution:
$$\int_{0}^{2\pi} \sin x \cdot(\cos x+\sqrt{2-\cos x})\,\mathrm dx=-\int_{0}^{2\pi} (\cos x+\sqrt{2-\cos x})\,\mathrm d\cos(x)=0$$
Because $\cos(0)=\cos(2\pi)=1$
Speaking about your derivation:
$$\int_{0}^{2\pi} (\cos x+\sqrt{u})du$$
And you assumed the substitution $u(x)=2-\cos(x)$, so $\cos(x)=2-u(x)$ and
$$\int_{1}^{1} (2-u +\sqrt{u})du$$ |
H: The wikipedia proof of Bolzano Weierstrass theorem
I was going through the proof of the Bolzano-Weierstrass theorem on the corresponding Wikipedia page.
http://en.wikipedia.org/wiki/Bolzano%E2%80%93Weierstrass_theorem
I could understand the first part, where the existence of infinitely many so-called peaks is assumed, but in the part where only finitely many peaks are assumed I could not understand the logic used after $N=n+1$. How is it proven that a finite number of peaks cannot occur, or that a convergent subsequence exists in that case?
Waiting for insightful comments.
AI: This is a pretty cool proof.
In the first part, which you understand, we state that any sequence with infinitely many peaks has a monotonically decreasing subsequence. In the next part, we show that if there are only finitely many peaks, there has to be a monotonically increasing subsequence. The key to this next proof is that if a number is not a peak, then there's some greater entry further along in the sequence. Remember: a member $x_n$ of a sequence is a peak if and only if all of the entries coming afterwards are strictly less than $x_n$.
So, assuming you only have finitely many peaks:
We start with whatever entry comes after the final peak and call this entry $x_{n_1}$. We know that $x_{n_1}$ is not a peak, since we already hit the last peak. Because of this, there is some $x_{n_2}$ that is greater than $x_{n_1}$. $x_{n_2}$ is not a peak, so rinse and repeat and then we have $x_{n_3}$. We can repeat this process since there are no more peaks to produce an infinite subsequence $\{x_{n_j}\}$ that is monotonically increasing.
I hope that clears things up. |
H: Calculate the $r$-th derivative of the function $f$
Let $f$ be an analytic function defined by
$$f(s)=g(s)\sum_{n=1}^{\infty}a_{n}/n^{s}$$
where $\sum_{n=1}^{\infty}a_{n}/n^{s}$ is an absolutely convergent series for $Re(s)>1$.
I have the following question:
Calculate the $r$-th derivative of the function $f$ for $Re(s)>1$, i.e., $f^{(r)}(s)$. The expression of $g$ is not important in this case.
AI: You will have to use Leibniz's rule that says that $$(f\cdot g)^{(r)}=\sum_{k=0}^r\binom{r}{k}f^{(k)}g^{(r-k)}$$
where $f^{(0)}=f$ and $g^{(0)}=g$, plus the fact that you can differentiate $f$ term-wise. You will have to use $$\left(\frac{d}{ds}\right)^{r}n^{-s}=(-1)^rn^{-s}\log^r n$$
too. |
H: Conditional density function of gamma distributed R.V.'s, ${\Gamma(2,a)}$
I'm stuck on the following problem:
Let $X$ and $Y$ be independent $\Gamma(2,a)$-distributed random variables. Find the conditional distribution of $X$ given that $X+Y=2$.
So the problem is to find $f_{X\mid X+Y=2}(x)$, which from the definition equals $f_{X\mid X+Y=2}(x)=\frac{f_{X+Y=2,X}(2,x)}{f_{X+Y=2}(2)}$. I guess that I could find the distribution of $X+Y$ (which I believe is also gamma), but then it would still be a problem to find $f_{X+Y=2,X}(2,x)$. Is there an easy way of thinking about these kinds of problems?
AI: Let $Z=X+Y$, then the density $f_{X,Z}$ of $(X,Z)$ is defined by $f_{X,Z}(x,z)=f_X(x)f_Y(z-x)$ because $X$ and $Y$ are independent hence the conditional distribution of $X$ conditionally on $Z=z$ is proportional to $f_X(x)f_Y(z-x)$, that is,
$$
f_{X\mid Z}(x\mid z)=\frac1{c(z)}f_X(x)f_Y(z-x),\qquad
c(z)=\displaystyle\int f_X(t)f_Y(z-t)\mathrm dt.
$$
In the special case asked for in the question, $f_X(x)=f_Y(x)\propto x\mathrm e^{-ax}\mathbf 1_{x\geqslant0}$, hence
$$
f_X(x)f_Y(z-x)\propto x(z-x)\mathrm e^{-az}\mathbf 1_{0\leqslant x\leqslant z},
$$
and, for every $z\geqslant0$,
$$
c(z)=\mathrm e^{-az}\int_0^z t(z-t)\mathrm dt\propto\mathrm e^{-az}z^3,
$$
that is,
$$
f_{X\mid Z}(x\mid z)=\frac6{z^3}x(z-x)\mathbf 1_{0\leqslant x\leqslant z}.
$$
In other words, $X=UZ$ where $U$ is independent of $Z$ and beta $(2,2)$.
More generally, if $X$ is gamma $(\alpha,a)$ and $Y$ is gamma $(\beta,a)$, then $f_X(x)\propto x^{\alpha-1}\mathrm e^{-ax}\mathbf 1_{x\geqslant0}$ and $f_Y(y)\propto y^{\beta-1}\mathrm e^{-ay}\mathbf 1_{y\geqslant0}$, hence
$$
f_X(x)f_Y(z-x)\propto x^{\alpha-1}(z-x)^{\beta-1}\mathrm e^{-az}\mathbf 1_{0\leqslant x\leqslant z},
$$
and
$$
f_{X\mid Z}(x\mid z)=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)z^{\alpha+\beta-1}}x^{\alpha-1}(z-x)^{\beta-1}\mathbf 1_{0\leqslant x\leqslant z}.
$$
In other words, $X=UZ$ where $U$ is independent of $Z$ and beta $(\alpha,\beta)$. |
H: Let $f : \mathbb{R} \to \mathbb{R}$ be a continuous function and $A \subseteq \mathbb{R}$
Let $f : \mathbb{R} \to \mathbb{R}$ be a continuous function and $A \subseteq \mathbb{R}$
(i) If $A$ is connected, is $f^{-1}(A)$ so?
(ii) If $A$ is compact, is $f^{-1}(A)$ so?
(iii) If $A$ is finite, is $f^{-1}(A)$ so?
(iv) If $A$ is bounded, is $f^{-1}(A)$ so?
No: consider $f(x)=(x-1)(x-2)$, for which $f^{-1}(0)=\{1,2\}$; the other three can be countered by a constant function, right?
AI: Your suggested counterexamples are correct. While continuous functions must map connected sets to connected sets, compact sets to compact sets, and finite sets to finite sets, and uniformly continuous functions must map bounded sets to bounded sets, the same does not hold for preimage. |
H: Differential equation of an integral
I read a paper containing a differential equation whose derivation I don't understand.
Let $I(y) = \int_0^\infty {g(x,\,y)f(x)dx} $ where $f(\cdot)$ is a probability distribution function and $g(\cdot,\,\cdot)$ is just a function of the variables $x$ and $y$. Then we obtain
$$
\frac{{\partial I(y)}}{{\partial y}} = \frac{{\partial g(x,y)}}{{\partial y}} \times f(x)
$$
Can someone explain why we can have the above equation?
If $I(x,\,y) = \int_a^b {g(x,\,y)f(x)dx} $ for any real numbers $a$ and $b$, is the above differential equation still valid?
AI: No, this is not true. The left side $\dfrac{\partial I(y)}{\partial y}$ does not depend on $x$, but the right side does. Perhaps you mean
$$ \dfrac{\partial I(y)}{\partial y} = \int_0^\infty \dfrac{\partial g(x,y)}{\partial y} f(x)\ dx $$
which (under appropriate conditions) is true.
If that's not it, and this is a published paper, perhaps you could send us a link or reference and we could tell you what it's really saying. |
H: $\vec{u}+\vec{v}-\vec{w},\;\vec{u}-\vec{v}+\vec{w},\;-\vec{u}+\vec{v}+\vec{w} $ are linearly independent if and only if $\vec{u},\vec{v},\vec{w}$ are
I'm confused: how can I prove that $$\vec{u} + \vec{v} - \vec{w} , \qquad \vec{u} - \vec{v} + \vec{w},\qquad - \vec{u} + \vec{v} + \vec{w} $$ are linearly independent vectors if, and only if, $\vec{u}$, $\vec{v}$ and $\vec{w}$ are linearly independent?
Ps: sorry for my poor English!
AI: Hint:$$a(\vec{u} + \vec{v} - \vec{w})+ b (\vec{u} - \vec{v} + \vec{w})+c( - \vec{u} + \vec{v} + \vec{w})=0\\
\implies(a+b-c)\vec{u}+(a-b+c)\vec{v}+(-a+b+c)\vec{w}=0$$ then we have $$a+b-c=0 ,a-b+c=0 ,-a+b+c=0$$ From which we can easily conclude $a=b=c=0$. |
H: What's the solution of a differential equation, when the "limit" is 0
Sorry if the questions sounds horrible to all the mathematicians' ears out there, but my math level is limited, and I just try to get a better intuitive idea of what happens in that case.
I "understand" the concept of differential. It's the change of a function over a certain value (for example the change of speed over time): dV / dt and dt tends to 0. Now my question is, what happens when dt = 0, is it a valid case (can this happens? can it be said), and if yes, is it somehow connected to the Dirac delta function. Finally a friend of mine told me this has somehow a connection with something called the "limit theorem" in differential calculus? I found some info about a "limit theorem" but that doesn't seem to be connected at all to this.
Your help is greatly appreciated, and sorry again if I am not using the right terminology. Please correct me, I am here to learn.
Thank you -
AI: Consider the curve $y = x^2$, and the point $(2,4)$ on that curve. What's the slope of that curve at that point?
Well, if I have two points $(x_1, y_1)$ and $(x_2, y_2)$, the slope of the line connecting them is $\dfrac{y_2-y_1}{x_2-x_1}$. But of course we only have one point here. Let's move along the curve very slightly: let's say that we move from $x = 2$ to $x = 2+\delta$ (where $\delta$ is something really small), so that $y (= x^2)$ is now $(2+\delta)^2$, i.e. $4 + 4\delta + \delta^2$. (Draw yourself a picture of these two points on the curve, and the line through these points.)
The slope between the points $(2,4)$ and $(2+\delta, 4+4\delta+\delta^2)$ is $\dfrac{(4+4\delta+\delta^2) - 4}{(2+\delta) - 2}$, which you can easily simplify to $4+\delta$. So what's the slope at $(2,4)$? Well, it's what happens when $\delta$ tends to 0 (this is formally called a limit) - that is, we'll get $4$. But can we just set $\delta = 0$ right from the start? No - because then we get a 'slope' of $\dfrac{4-4}{2-2}$, which obviously doesn't make any sense.
Now do this again for two points $(x, x^2)$ and $((x+\delta), (x+\delta)^2)$. The slope function you get at the end is called the derivative of $y = x^2$, and often written $dy/dx$, but these $d$ things are just formal symbols, and you shouldn't think of them as numbers. It's just code for what I wrote above.
Does this help? |
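If it helps to see the numbers, here is a tiny script tabulating that difference quotient at $x=2$ for shrinking $\delta$:

```python
# Difference quotient of y = x^2 at x = 2 for shrinking delta
f = lambda x: x ** 2
for delta in [1.0, 0.1, 0.01, 0.001, 1e-6]:
    print(delta, (f(2 + delta) - f(2)) / delta)   # approaches 4; setting delta = 0 would give 0/0
```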
H: First isomorphism theorem for topological groups
Let $f$ a continuous homomorphism from a topological group $G$ onto a topological group $H$. We denote $K = Ker(f)$.
I already proved that $\overline{f}:G/K\to H$ defined by $\overline{f} (xK)=f(x)$ is an algebraic isomorphism and continuous.
Now, I suppose that $f$ is open.
How can I prove that $\overline{f}$ is a homeomorphism? First, I tried to prove it is open, but I failed.
Any hint? Thanks.
AI: By definition, $\bar{f}$ is the unique continuous map such that $\bar{f}\circ\pi=f$, where $\pi:G\rightarrow G/K$ is the canonical projection.
Let $V\subseteq G/K$ be open, i.e., $\pi^{-1}(V)\subseteq G$ is open. Then, because $\pi$ is surjective, $\pi(\pi^{-1}(V))=V$, so that
$\bar{f}(V)=\bar{f}(\pi(\pi^{-1}(V)))=f(\pi^{-1}(V))$.
So, since $\pi$ is continuous and $f$ is open, $\bar{f}(V)=f(\pi^{-1}(V))$ is open, so $\bar{f}$ is indeed an open map. |
H: How can i solve this separable differential equation?
Given Problem is to solve this separable differential equation:
$$y^{\prime}=\frac{y}{4x-x^2}.$$
My approach: was to build the integral of y':
$$\int y^{\prime} = \int \frac{y}{4x-x^2}dy = \frac{y^2}{2(4x-x^2)}.$$
But now I am stuck in differential equations, what would be the next step? And what would the solution look like? Or is this already the solution? I doubt that.
P.S. edits were only made to improve language and latex
AI: That's not the way to solve separable equations, this is the general procedure:
$$\frac{dy}{dx}=\frac{y}{4x-x^2}$$
$$\frac{dy}{y}=\frac{dx}{4x-x^2}$$
Now that's what you integrate:
$$\int\frac{dy}{y}=\int\frac{dx}{4x-x^2}$$
The left one is immediate; the second one can be done by partial fractions, $\frac{1}{4x-x^2}=\frac{1}{x(4-x)}=\frac14\left(\frac1x+\frac1{4-x}\right)$, which yields two more logarithms:
$$4\log|y| + C = \log|x|-\log|4-x|$$
$$y = C\left(\frac{x}{4-x}\right)^\frac{1}{4}$$
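If SymPy is available, a quick computer-algebra cross-check (constants and branch choices are left to the software):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
sol = sp.dsolve(sp.Eq(y(x).diff(x), y(x) / (4 * x - x ** 2)))
print(sol)   # an explicit solution equivalent to y = C*(x/(4 - x))**(1/4) up to constants and branches
```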
H: If $\theta\in\mathbb{Q}$, is it true that $(\cos \theta + i \sin \theta)^\alpha = \cos(\alpha\theta) + i \sin(\alpha\theta)$?
Is the following true if $\theta\in\mathbb{Q}$?
$$(\cos \theta + i \sin \theta)^\alpha = \cos(\alpha\theta) + i \sin(\alpha\theta)$$
Is it true if $\alpha\in\mathbb{R}$? In each case, prove or give a counterexample, whichever is applicable.
I am not able to guess except about the de Moivre's theorem.
AI: It is not true in general.
$$
(\cos \theta + i \sin \theta)^\frac 12 = \left [ \begin{array}{l}
\cos \frac \theta 2 + i \sin \frac \theta 2 \\
\cos \left ( \frac \theta 2 + \pi \right ) + i \sin \left ( \frac \theta 2 + \pi \right )
\end{array}\right .
$$ |
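A quick numerical illustration of the failure for $\alpha=\tfrac12$ (the angle $\theta=4$ is an arbitrary rational choice; Python's complex power uses the principal branch):

```python
import math

theta, alpha = 4.0, 0.5
lhs = complex(math.cos(theta), math.sin(theta)) ** alpha          # principal square root
rhs = complex(math.cos(alpha * theta), math.sin(alpha * theta))

print(lhs)   # approximately  0.416 - 0.909j
print(rhs)   # approximately -0.416 + 0.909j  (the other square root)
```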
H: Integration exercise: $ \int \frac{e^{5x}}{ (e^{2x} - e^x - 20) }dx$
I have trouble integrating:
$$ \int \frac{e^{5x}}{e^{2x} - e^x - 20} dx$$
With $t=e^x$, I've rewritten it as:
$$\int \frac{t^5}{t^2 - t - 20} \frac{1}{t} dt$$
Then I tried integration by parts, but I am not any closer to the solution.
AI: $$\int \frac{t^5}{t^2 - t - 20} \frac{1}{t} dt$$
$$\int \frac{t^4}{t^2 - t - 20} dt$$
first perform the polynomial division to get a proper fraction
$$\int (t^2+t+21)+\frac{41t+420}{t^2 - t - 20} dt$$
$$\int (t^2+t+21)\;dt+\int \frac{41t+420}{t^2 - t - 20} dt$$
the first integral is basic, and for the second use partial fractions
$$\dfrac{t^3}{3}+\dfrac{t^2}{2}+21t+\int \dfrac {625}{9(t-5)}-\dfrac{256}{9(t+4)}\;\;dt$$
$$\dfrac{t^3}{3}+\dfrac{t^2}{2}+21t+\dfrac {625}{9}\log (t-5)-\dfrac{256}{9}\log (t+4)$$
put $t=e^x$
$$\mathbf {Answer}=\dfrac{e^{3x}}{3}+\dfrac{e^{2x}}{2}+21e^x+\dfrac {625}{9}\log (e^x-5)-\dfrac{256}{9}\log (e^x+4)+C\;$$
solution of the partial fractions used above
$$\dfrac{41t+420}{t^2 - t - 20}=\dfrac {A}{t-5}+\dfrac{B}{t+4}$$
$${41t+420}={A}(t+4)+B({t-5})$$
put $t=5$ and then $t=-4$
$${41\times 5+420}={A}(5+4)+B({5-5})$$
$$A=\dfrac {625}{9}$$
$${41\times (-4)+420}={A}(-4+4)+B({-4-5})$$
$$B=\dfrac {-256}{9}$$
$$\dfrac{41t+420}{t^2 - t - 20}=\dfrac {625}{9(t-5)}-\dfrac{256}{9(t+4)}$$ |
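A quick symbolic check of the antiderivative in $t$ (assuming SymPy is available; differentiating should recover the proper fraction):

```python
import sympy as sp

t = sp.symbols('t')
F = (t**3 / 3 + t**2 / 2 + 21 * t
     + sp.Rational(625, 9) * sp.log(t - 5)
     - sp.Rational(256, 9) * sp.log(t + 4))
print(sp.simplify(sp.diff(F, t) - t**4 / (t**2 - t - 20)))   # 0
```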
H: Proving that a set over a field is a vector space
Given: $S$ is a nonempty set, $K$ is a field. Let $C(S, K)$ denote the set of all functions ${f}\in\ C(S,K)$ such that ${f}(s) = 0 $ for all but a finite number of elements of $S$. Prove that $C(S, K)$ is a vector space.
OK. I was thinking about using the simple additive axioms that define vector spaces. One of those is that there exist two elements such that $x$ (which is some vector) added to zero equals $x$, or $x + 0 = x$.
Let $g(s)$ be an arbitrary function. $f + g = g$ when $f(s) = 0$. In addition, if we assume $g(s)$ to be in the space $C(S,K)$ and $f + g = g$ then both vectors are in the space $C(S,K)$ and are closed under addition.
Am I on the right track here? I feel like there's another step I need to have.
AI: This is a very important construction in linear algebra. Given a set $S$ and a field $\Bbb F$ we can consider all functions $f : S \to \Bbb F$ such that $f(x) = 0$ unless on finetely many points of $S$. A function like that is said to have finite support. If we define addition and multiplication by scalar pointwise, then the set of all such functions form a vector space. All the axioms are trivially satisfied, the only one that may be tricky is the axiom of closure. Let us denote this set $F(S)$ (I'll change the notation for the set) and let's try to show that with these operations $F(S)$ is closed under linear combinations.
For that matter, consider $f_1, f_2 \in F(S)$ and $\lambda_1, \lambda_2 \in \Bbb F$. Then we want to show that we have $\lambda_1 f_1 + \lambda_2 f_2 \in F(S)$. The idea is: there is a finite subset $S_1 \subset S$ such that $f_1$ is nonzero only there, and there is a finite set $S_2 \subset S$ such that $f_2$ is nonzero only there. The only place in $S$ where $\lambda_1 f_1 + \lambda_2 f_2$ can be nonzero is inside $S_1 \cup S_2$, because outside of it both functions are zero. But this is a union of two finite sets, hence finite, and so the linear combination is also in $F(S)$.
Now, just to give you the notion of the importance of that, consider $\delta_a \in F(S)$ the function defined by:
$$\delta_a(x) = \begin{cases}1, & x=a \\ 0, & x\neq a\end{cases}$$
This function indicates whether the point $x$ is $a$ or not. If we consider $i : S \to F(S)$ given by $i(a) = \delta_a$ the set $i(S)$ will be a basis for $F(S)$ (try proving this). Since $\delta_a$ indicates whether a point is $a$ or not, we may say that $\delta_a$ represents $a$ inside of $F(S)$. In that case, we will have a basis that intuitively we can think of as formed by the elements of $S$. So, when we have some arbitrary set, we can always construct a vector space from it that intuitively has the set $S$ as a basis, and we call this vector space the free vector space in terms of $S$ and denote it $F(S)$.
Edit: I still have to define addition and multiplication by a scalar pointwise. Well, this is the term used for the most common definition of these operations. We simply set:
$$(f+g)(x)=f(x)+g(x) \quad \forall x \in S$$
$$(\lambda f)(x) = \lambda f(x) \quad \forall x \in S$$
Whenever some operation is defined like that we say that it is defined pointwise. |
H: algebraic-geometric interpretation of the principal ideal theorem
This quote is from Matsumura's Commutative Ring Theory, page 100: "The principal ideal theorem corresponds to the familiar and obvious-looking proposition of geometrical and physical intuition (which is strictly speaking not always true) that 'adding one equation can decrease the dimension of the space of solutions by at most one'."
One consequence of the principal ideal theorem is that if $P$ is a prime ideal of height $r$ in a Noetherian ring $A$, then $P$ is the minimal prime divisor of some ideal $I=(a_1,\cdots,a_r)$ and that the height of $P/(a_1,\cdots,a_i)$ is precisely equal to $r-i$.
Question 1: How can we make a connection between the quote from Matsumura and the above consequence of the principal ideal theorem? Which is the solution space?
Question 2: Are there any examples where adding one equation drops the dimension by more than 1?
AI: The solution space associated with a function $f$ is, of course, the zero-set $\{ x : f (x) = 0 \}$, or more formally, $\{ \mathfrak{p} \in \operatorname{Spec} A : f \in \mathfrak{p} \}$.
Consider $A = \mathbb{C} [x,y,z] / (x z, y z, z^2 - z)$. This is a 2-dimensional noetherian ring: its spectrum is the disjoint union of the affine plane and a point. Of course, if we look at the equation $z - 1 = 0$, we end up with a 0-dimensional ring (namely $\mathbb{C}$). |
H: Product-to-sum formulas?
My old pre-calculus book says:
$$\sin u\cos v=\frac{1}{2}[\sin (u+v)+\sin(u-v)]$$
and $$\cos u \sin v=\frac{1}{2}[\sin(u+v)-\sin(u-v)]$$ I don't understand why there is a difference, since multiplication is commutative. Can anyone help? Thanks!
AI: Note simply that $\sin (u+v) = \sin (v+u)$ and $\sin (u-v)=-\sin (v-u)$.
The role of $u$ and $v$ in the two equations you quote has been swapped on the left hand side, but not on the right hand side - hence the change of sign. If the order is swapped on the right hand side of the second equation, so is the sign. |
H: Group $G$ of order $p^2$: $\;G\cong \mathbb Z_{p^2}$ or $G\cong \mathbb Z_p \times \mathbb Z_p$
If the order of $G$ is $p^2$ then how do I show that $G$ is isomorphic to $\mathbb Z_{p^2}$ or $\mathbb Z_p\times\mathbb Z_p$.
AI: Hint:
Argue that $G$ must be abelian (why?)
Then use the Fundamental Theorem of Finitely Generated Abelian Groups to prove that any abelian groups of order $p^2$ must necessarily be isomorphic to one of the two groups $\mathbb Z_{p^2}$ or $\mathbb Z_p\times \mathbb Z_p$, which are non-isomorphic groups, since $\mathbb Z_m\times \mathbb Z_n \cong \mathbb Z_{mn} \iff \gcd(m,n) = 1$, and clearly, $\gcd(p, p) = p \neq 1$. |
H: Compactifications of limit ordinals
I thought I knew that but it seems I don't.
Let $\alpha$ be a countable, limit ordinal $\alpha>\omega$. Give $\alpha$ its order topology. What is the Stone-Čech compactification of $\alpha$? Is there any reason why it should be $\beta \omega$?
AI: If $\alpha>\omega$, then the subspace $\omega+1$ of $\alpha$ is compact. It is therefore a compact subset of $\beta\alpha$ and hence a closed subset. But every infinite closed subset of $\beta\omega$ contains a copy of $\beta\omega$, so $\beta\omega$ contains no set homeomorphic to $\omega+1$. Thus, $\beta\alpha$ cannot be homeomorphic to $\beta\omega$.
If $\alpha=\beta+\omega$ for some limit ordinal $\beta$, then $\alpha$ is homeomorphic to the disjoint union of $\beta+1$ and $\omega$, and since $\beta+1$ is compact, $\beta\alpha$ is homeomorphic to $(\beta+1)\sqcup\beta\omega$.
It’s not clear to me just what happens when $\alpha$ is more complicated, even for $\omega^2$. |
H: Automorphism that saves all subgroups of a group.
Let $h\in$Aut($G$) so that it saves subgroups: $h(U)=U$ for each subgroup $U$ of $G$, and $\alpha$ is any automorphism. Is it true that $\alpha h \alpha^{-1}$ also saves subgroups?
AI: Hints:
$$\forall\,U\le G\;,\;\;\alpha^{-1}(U)\le G\implies h\alpha^{-1}(U)=\alpha^{-1}(U)\implies\ldots\ldots$$ |
H: The limit of $\lim\limits_{x \to \infty}\sqrt{x^2+3x-4}-x$
I tried all I know and I always get to $\infty$, Wolfram Alpha says $\frac{3}{2}$. How should I simplify it?
$$\lim\limits_{x \to \infty}\sqrt{(x^2+3x-4)}-x$$
I tried multiplying by its conjugate, taking the squared root out of the limit, dividing everything by $\sqrt{x^2}$, etc.
Obs.: Without using l'Hôpital's.
AI: Note that
\begin{align}
\sqrt{x^2+3x-4} - x & = \left(\sqrt{x^2+3x-4} - x \right) \times \dfrac{\sqrt{x^2+3x-4} + x}{\sqrt{x^2+3x-4} + x}\\
& = \dfrac{(\sqrt{x^2+3x-4} - x)(\sqrt{x^2+3x-4} + x)}{\sqrt{x^2+3x-4} + x}\\
& = \dfrac{x^2+3x-4-x^2}{\sqrt{x^2+3x-4} + x} = \dfrac{3x-4}{\sqrt{x^2+3x-4} + x}\\
& = \dfrac{3-4/x}{\sqrt{1+3/x-4/x^2} + 1}
\end{align}
Now we get
\begin{align}
\lim_{x \to \infty}\sqrt{x^2+3x-4} - x & = \lim_{x \to \infty} \dfrac{3-4/x}{\sqrt{1+3/x-4/x^2} + 1}\\
& = \dfrac{3-\lim_{x \to \infty} 4/x}{1 + \lim_{x \to \infty} \sqrt{1+3/x-4/x^2} } = \dfrac{3}{1+1}\\
& = \dfrac32
\end{align} |
H: Inequalities between chromatic number and the number of vertices
I am currently doing exercises from graph theory and i came across this one that i can't solve. Could anyone give me some hints how to do it?
Prove that for every graph G of order $n$ these inequalities are true:
$$2 \sqrt{n} \le \chi(G)+\chi(\overline G) \le n+1$$
AI: To prove that $\chi(G)+\chi(\overline G)\ge2\sqrt n$: Suppose $G$ and $\overline G$ have been colored with $\chi(G)$ and $\chi(\overline G)$ colors, respectively. Map each vertex to an ordered pair of colors, namely, $v\mapsto ($color of $v$ in $G,$ color of $v$ in $\overline G)$. Note that this map is injective, showing that $n\le\chi(G)\chi(\overline G)$, whence $\sqrt n\le\sqrt{\chi(G)\chi(\overline G)}\le\frac{\chi(G)+\chi(\overline G)}2$ by the inequality between the geometric and arithmetic means.
The inequality $\chi(G)+\chi(\overline G)\le n+1$ can be proved by induction on $n$. Let $G$ be a graph of order $n+1$ and let $v\in V(G)$. By the induction hypothesis, $\chi(G-v)+\chi(\overline G-v)\le n+1$; we have to show that $\chi(G)+\chi(\overline G)\le n+2$. Inasmuch as $\chi(G)\le\chi(G-v)+1$ and $\chi(\overline G)\le\chi(\overline G-v)+1$, the conclusion clearly holds if either $\chi(G)=\chi(G-v)$ or $\chi(\overline G)=\chi(\overline G-v)$. On the other hand, if $\chi(G)>\chi(G-v)$ and $\chi(\overline G)>\chi(\overline G-v)$, then $v$ must be joined to $\chi(G-v)$ many vertices by edges of $G$, and to $\chi(\overline G-v)$ many vertices by edges of $\overline G$, which implies that $\chi(G-v)+\chi(\overline G-v)\le n$ and so $\chi(G)+\chi(\overline G)\le n+2$.
H: Finding distance in Hilbert space
How to calculate $d(e_1,L)$, where $e_1=(1,0,0,\ldots)$ and $L=\left\{x\in l^2\mid x=(\xi_j)_{j=1}^\infty,\sum_{j=1}^\infty\xi_j=0\right\}$.
Thanks in advance.
AI: For $n \in \mathbb N$, let $x = (1, -\frac 1n, \ldots, -\frac 1n, 0,\ldots)$, with $n$ copies of $-\frac 1n$; then $x \in L$ and
\begin{align*}
\|x-e_1\|^2 &= \sum_{i=1}^n \frac 1{n^2}\\
&= \frac 1n\\
&\to 0
\end{align*}
Hence $d(e_1, L) = 0$. |
H: Trying to show a measure is inner regular on open sets
Background:
Let $X$ be a locally compact Hausdorff space, $C_{00}$ the collection of continuous functions on $X$ with relatively compact support. Then let $I \in C_{00}^*$ such that $f \in C_{00}$ nonnegative implies $I(f)\geq 0$.
Define for nonnegative lower semi continuous $g$, $$J(g) = sup(I(f): f\in C_{00}, 0\leq f \leq g)$$.
For general nonnegative $g$, $$K(g) = inf(J(f) : f\text{ lower semi continuous, }g \leq f)$$
My question:
Let $U \subset X$ be open.
Let $b < K(\chi_U)$. Next line chooses a $f\in C_{00}, f\geq 0$ such that $I(f) > b$ and $f \leq \chi_U$. I cannot verify that this is possible from the definitions above. How can it be done?
AI: $\chi_U$ is lower semicontinuous: Let $x \in X$ and $\epsilon > 0$. If $x \in U$, then $\chi_U$ is continuous at $x$ (as constant on the neighbourhood $U$ of $x$), so let $x \not\in U$. Then for each $y \in X$, we have $\chi_U(y) \ge \chi_U(x) - \epsilon = -\epsilon$. So $\chi_U$ is lower semicontinuous on $X \setminus U$ also.
Hence $K(\chi_U) = J(\chi_U)$ and the existence of $f$ follows from the definition of $J$. |
H: Contour integral for $\int _0^\infty \frac{t^2+1}{t^4+1} dt$
I know that the four singularities for $\int _0^\infty \frac{t^2+1}{t^4+1} dt$ are
$\pm \frac{\sqrt{2}}{2} \pm i \frac{\sqrt{2}}{2}$. Also, since the function is even, I can calculate $\frac{1}{2} \int _{-\infty}^\infty \frac{t^2+1}{t^4+1} dt$ instead.
If I use semicircle as a contour, I can put this into
$\int _{-\infty}^\infty \frac{t^2+1}{t^4+1} dt = \int _C \frac{z^2+1}{z^4+1} dz + \lim _{a\rightarrow \infty}\int _{-a}^{a} \frac{z^2+1}{z^4+1} dz$
I can handle the first term by using partial fractions with singularities specified above, but I am quite confused with how to show that the second term goes to zero. Would you mind giving me any help? Just to make sure, this is for personal exercise and not a homework question.
AI: Your notation is a little confusing, but I assume $C$ is the semi-circle of radius $a$. The integral over $C$ tends to $0$ as $a \to \infty$, since
$$
\left| \int_C \frac{z^2+1}{z^4+1}\,dz \right| \le \pi a \max_{C}
\left| \frac{z^2+1}{z^4+1} \right| \le \pi a \frac{a^2+1}{a^4-1}
$$
which goes to $0$ as $a\to\infty$. |
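For reference, carrying out the residue computation gives $\int_0^\infty\frac{t^2+1}{t^4+1}\,dt=\frac{\pi}{\sqrt2}$, which can be checked numerically (assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda t: (t**2 + 1) / (t**4 + 1), 0, np.inf)
print(val, np.pi / np.sqrt(2))   # both approximately 2.2214
```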
H: $\lim_{n\rightarrow\infty}\sup(a_n+b_n) = \lim_{n\rightarrow\infty}\sup a_n + \lim_{n\rightarrow\infty}\sup b_n$ for all bounded sequences $(b_n)$
Let $(a_n)$ be a bounded sequence. Suppose that for every bounded sequence $(b_n)$ we have $\lim_{n\rightarrow\infty}\sup(a_n+b_n) = \lim_{n\rightarrow\infty}\sup a_n + \lim_{n\rightarrow\infty}\sup b_n$. Prove that $(a_n)$ is convergent.
We can take $b_n=-a_n$ to get $\lim_{n\rightarrow\infty}\sup a_n + \lim_{n\rightarrow\infty}\sup (-a_n) = 0$. For any subsequence of $a_n$ which converges to $L$, the corresponding subsequence of $-a_n$ converges to $-L$. How can we conclude from here?
AI: Hint:
$$\lim_{n\to\infty}\sup (-a_n)=-\lim_{n\to\infty}\inf a_n.$$ |
H: Prove: $D_{8n} \not\cong D_{4n} \times Z_2$.
Prove $D_{8n} \not\cong D_{4n} \times Z_2$.
My trial:
I tried to show that $D_{16}$ is not isomorphic to $D_8 \times Z_2$ by making a contradiction as follows:
Suppose $D_{4n}$ is isomorphic to $D_{2n} \times Z_2$, so $D_{8}$ is isomorphic to $D_{4} \times Z_2$. If $D_{16}$ is isomorphic to $D_{8} \times Z_2 $, then $D_{16}$ is isomorphic to $D_{4} \times Z_2 \times Z_2 $, but there is not Dihedral group of order $4$ so $D_4$ is not a group and so $D_{16}\not\cong D_8\times Z_2$, which gives us a contradiction. Hence, $D_{16}$ is not isomorphic to $D_{8} \times Z_2$.
I found a counterexample for the statement, so it's not true in general, or at least it's not true in this case.
__
Does this proof make sense or is it mathematically wrong?
AI: $D_{8n}$ has an element of order $4n$, but the maximal order of an element in $D_{4n} \times \mathbb{Z}_2$ is $2n$. |
H: Question about finding minimum-Hilbert spaces
How to find $$\min_{a,b,c\in\mathbb{C}}{\int_0^{\infty}} |a+bx+cx^2+x^3|^2 e^{-x} dx = ?$$ Thanks in advance.
AI: $\langle f,g \rangle = \int\limits_0^{\infty} f \bar{g}e^{-x}dx$ is a scalar product (for $f,g \in L^2(e^{-x}dx)$), so we have to find the squared distance of the vector $x^3$ from the subspace generated by $1,x,x^2$ (note that $1,x,x^2,x^3 \in L^2(e^{-x}dx)$).
Let $x^3=k+lx+mx^2+f$, with $f \perp \mathcal{L}\{1,x,x^2\}$. Taking scalar products with $1,x,x^2$, it follows that $k+l+2m=6$, $k+2l+6m=24$, $2k+6l+24m=120$, so $k=6, l=-18, m=9$, and the required minimum is the squared norm of the vector $x^3-6+18x-9x^2$ in this scalar product, that is $$\min_{a,b,c \in \textbf{C}} \int\limits_{0}^{\infty} |a+bx+cx^2+x^3|^2e^{-x}dx=\int\limits_{0}^{\infty} |x^3-6+18x-9x^2|^2e^{-x}dx=36.$$
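A quick symbolic confirmation of the value $36$ (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate((x**3 - 9*x**2 + 18*x - 6)**2 * sp.exp(-x), (x, 0, sp.oo)))   # 36
```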
H: How can I evaluate $\lim_{x\to7}\frac{\sqrt{x}-\sqrt{7}}{\sqrt{x+7}-\sqrt{14}}$?
I need to evaluate the following limit, if it exists:
$$\lim_{x\to7}\frac{\sqrt{x}-\sqrt{7}}{\sqrt{x+7}-\sqrt{14}}$$
How can I solve it without using differentiation or L'Hôpital?
AI: We can get a nice function whose limit will not be indeterminate, by multiplying numerator and denominator each by the conjugate of $\sqrt{x + 7} - \sqrt{14}$:
$$\lim_{x\to7}\frac{\sqrt{x}-\sqrt{7}}{\sqrt{x+7}-\sqrt{14}}\cdot\dfrac{(\sqrt{x+7} + \sqrt{14})}{(\sqrt{x+7} + \sqrt{14})}$$
In the denominator, we obtain a difference of squares:
$(\sqrt{x + 7} - \sqrt{14})(\sqrt{x + 7} + \sqrt{14})\; =\; (x + 7) -14 \; =\; x-7 $
$$\lim_{x\to7}\frac{(\sqrt x-\sqrt 7)(\sqrt{x+7} + \sqrt{14})}{(x - 7)}$$
Hmmm...this doesn't seem to help us any, since both the numerator AND denominator will still evaluate to $0$.
But wait! We can view the term in the denominator as a difference of squares: $$(x - 7) = (\sqrt x)^2 - (\sqrt 7)^2 = (\sqrt{x} - \sqrt 7)(\sqrt x + \sqrt 7)$$
That gives us:
$$\lim_{x\to7}\dfrac{(\sqrt x-\sqrt 7)(\sqrt{x+7} + \sqrt {14})}{(\sqrt{x} - \sqrt 7)(\sqrt x + \sqrt 7)}$$
Canceling the common factor in numerator and denominator, we have (provided $x \neq 7$):
$$\lim_{x\to7}\dfrac{(\sqrt{x+7} + \sqrt {14})}{(\sqrt x + \sqrt 7)} = \frac {2\sqrt{14}}{2 \sqrt 7} = \sqrt{\frac {14}{7}} = \sqrt 2$$
which can now be evaluated without problems. We are taking the limit as $x \to 7$ after all, and so, while the original function is undefined at $x = 7$, the limit as $x \to 7$ indeed exists. |
H: Complex function integral
I have the function $f : D_f \subset\mathbb{C} \rightarrow \mathbb{C} $ defined by
$$f(z) = \frac{1}{(z-1)(z^2+2)}, z \subset D_f$$
where $D_f$ is the domain of $f$.
How do I calculate
$$\oint_\gamma f(z)\,dz
$$
where $\gamma$ is the circle with center $-1$, radius $1$, and positive orientation?
AI: The function $f$ is holomorphic except at $z = 1$ and $z = \pm i\sqrt 2$. None of these points lie inside $\gamma$. Cauchy's integral theorem shows that the integral is $0$. |
H: Finding vectors in $\mathbb R^n$ with Euclidean norm 1
I have a couple of questions here which ask:
Find two vectors in $\mathbb R^2$ with Euclidean norm 1, whose Euclidean inner product with (3, -1) is zero.
and
Show that there are infinitely many vectors in $\mathbb R^3$ with Euclidean norm 1 whose Euclidean inner product with (1,-3, 5) is zero.
Can anyone explain how to solve these types of problems in general? My textbook doesn't show the solution for these types.
AI: Hint: You are looking for a vector $(x,y)\in \mathbb R^2$ such that $\|(x,y)\|=1$ and $(x,y)\cdot (3,-1)=0$. The latter condition reads: $3x-y=0$. Now, the unity norm condition reads $x^2+y^2=1$. So, you two equations that you need to solve. A similar approach to problem 2 will furnish a solution. Note that drawing things may help you visualize what the algebra is encoding. |
H: Why is $X^4-16X^2+4$ irreducible in $\mathbb{Q}[X]$?
Determine whether $X^4-16X^2+4$ is irreducible in $\mathbb{Q}[X]$.
To solve this problem, I reasoned that since $X^4-16X^2+4$ has no rational roots hence irreducible.
But there is a hint to this question that uses different approach:
Try supposing it is reducible, then it must factor into a product of two monic quadratic polynomials with integer coefficients. Then show that it is impossible, then conclude the original polynomial is irreducible.
My questions regarding the hints:
1. Why can't we just show that since $X^4-16X^2+4$ has no rational roots hence irreducible?
2. Why do we have to factorise into a product of two monic quadratic polynomials with integer coefficients? Why monic? And why can't we factorise into polynomial with degree 1 and 3? Also, lastly why the coefficients have to be integers?
Thank you for any explanations!
Edit: Thanks for all the answers.
I saw that Second Gauss Lemma is used. But I only learned the first one in class. Which is:
Let $R$ be a Unique Facorisation Domain. If $f,g\in\mathbb{R}[X]$ are primitive, then so too is their product $fg$.
Is it inevitable to use Second Gauss Lemma? Is there any other way around?
AI: Answer to question 1: Consider the polynomial $(x^2+1)^2\in\Bbb Q[x]$. It has no rational roots and it is reducible over $\Bbb Q$. Your reasoning fails.
The absence of roots only guarantees that there are no factors of degree $1$.
Your polynomial has degree $4$, since it has no rational roots, it has no linear factors (that is factors of degree $1$), but it could have two factors of degree $2$.
Assume it does and try to reach a contradiction. If you do, then it is irreducible over $\Bbb Q$.
Answer to question 2: The factors don't need to be monic polynomials, but they can be. If you find a factorization in which the factors aren't monic, you just have to multiply by a certain constant to make the factors monic.
Answer to question 3: If you find a factorization $(x-\alpha )q(x)$, where $q(x)$ is a polynomial with rational coefficients of degree $3$, then the polynomial will have a rational root, namely $\alpha$, and you have established it doesn't.
Answer to question 4: The coefficients need not be integers, but (the second) Gauss's lemma allows us to assume the coefficients are integers, making the calculations much simpler. |
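For completeness, here is one way the check suggested by the hint can go (a sketch of the computation, using the reduction to monic integer quadratics justified above). Suppose
$$X^4-16X^2+4=(X^2+aX+b)(X^2+cX+d),\qquad a,b,c,d\in\mathbb Z.$$
Comparing coefficients gives
$$a+c=0,\qquad b+d+ac=-16,\qquad ad+bc=0,\qquad bd=4.$$
So $c=-a$ and the third equation becomes $a(d-b)=0$. If $a=0$, then $b+d=-16$ and $bd=4$, so $b,d$ are roots of $t^2+16t+4$, whose discriminant $240$ is not a perfect square; there are no integer solutions. If $d=b$, then $b^2=4$, so $b=\pm2$ and $a^2=2b+16\in\{20,12\}$, again not a perfect square. Hence no such factorization exists, and since there are no linear factors either, the polynomial is irreducible over $\mathbb Q$.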
H: "Rules of inference" when the last premise is a conditional?
Another very basic Discrete Mathematics homework problem. I don't want the answer as much as I want to understand the question:
Problem 7
For each of the following sets of premises, what relevant conclusion(s) can be reached? Explain which rules of inference are used.
a) "If I play hockey, then I am sore the next day", "I use the whirlpool if I am sore", "I did not use the whirlpool"
b) "I am dreaming or hallucinating", "I am not dreaming", "If I am hallucinating, I see elephants smoking"
Okay, now my problem is with b, which ENDS with a conditional. I'm pretty confident that I already got a) correct, so let's look at b):
$p$: I am dreaming
$q$: I am hallucinating
$r$: I see elephants smoking
According to the question, we have:
$p$ V $q$
~$p$
$q\rightarrow r$
The top two premises can be shortened to simply $q$ via "disjunctive syllogism":
$q$
$q \rightarrow r$
So...which rule can you use to draw any conclusions from the above, and what is the conclusion?
Using a truth table, if we look at the row where $q$ AND $q\rightarrow r$ are true, this means that $r$ must be true. So...is the conclusion $r$? But what rule is that?
AI: You are correct in your application of the Disjunctive Syllogism in part (b). That gives you the derived premise $q$. Now, you can use Modus Ponens and note that from $q$ together with $q\rightarrow r$, we derive that $r$ holds.
Modus Ponens:
$$\begin{align} &\text{Modus Ponens }\\
\hline \\
& q \rightarrow r & q\\
& q & q\rightarrow r\\
\hline \\
\therefore & r &\therefore r\end{align}$$
The argument can be written as $\;q, \;(q\rightarrow r) \models r\;\;$ or as $\;(q\rightarrow r),\;q \models r$ |
H: Estimating the rate of convergence of an integral
I'm studying the integral $\displaystyle\int_0^w \frac{s\mathrm ds}{(e^s+1)\sqrt{1-(s/w)^2}}$ as $w\to\infty$. The intuition suggests that this integral converges to $\displaystyle\int_0^\infty \frac{s\mathrm ds}{ e^s+1 }=\frac{\pi^2}{12}$, because the singularity in $s=w$ is integrable but the obtained mass goes to zero thanks to the exponential, and everywhere else the integrands are close (none of this is formal, of course).
To the more formal approach. We split the integral:
$\displaystyle\int_{w/2}^w \frac{s\mathrm ds}{(e^s+1)\sqrt{1-(s/w)^2}}\le \frac{w^2}{2\sqrt{2}(e^{w/2}+1)}\int_{1/2}^1 \frac{ \mathrm ds}{ \sqrt{1- s }}=\frac{w^2}{ 2(e^{w/2}+1)} \to 0$ as $w\to\infty$
and
$\displaystyle\int_0^{w/2} \frac{s\mathrm ds}{(e^s+1)\sqrt{1-(s/w)^2}} $ which converges to $\displaystyle\int_0^\infty \frac{s\mathrm ds}{ e^s+1 }$ by Dominated Convergence Theorem.
Now I'd like to estimate the rate of convergence with all constants in closed form (the knowledge of asymptotic behavior is not sufficient, unfortunately). Is there a method to achieve it? Maybe, with some trickier splitting of the integral?
I'd be glad to hear all suggestions.
AI: Using the identity $\dfrac1{\sqrt{1-t}}=1+\dfrac{t}{(1+\sqrt{1-t})\sqrt{1-t}}$, one gets
$$
\int_0^w \frac{s\mathrm ds}{(\mathrm e^s+1)\sqrt{1-(s/w)^2}}=I_\infty+\frac1{w^2}J(w)-K(w),
$$
with
$$
I_\infty=\int_0^\infty \frac{s\mathrm ds}{ \mathrm e^s+1 },\qquad
K(w)=\int_w^\infty \frac{s\mathrm ds}{ \mathrm e^s+1 }.
$$
and
$$
J(w)=\int_0^w \frac{s^3\mathrm ds}{(\mathrm e^s+1)\left(1+\sqrt{1-(s/w)^2}\right)\sqrt{1-(s/w)^2}}.
$$
Arguments similar to the ones used in the question show that $J(w)\to J_\infty$ with
$$
J_\infty=\int_0^\infty \frac{s^3\mathrm ds}{2(\mathrm e^s+1)}.
$$
Finally, $K(w)\ll\dfrac1{w^2}$ hence
$$
\lim_{w\to\infty}w^2\cdot\left(\int_0^w \frac{s\mathrm ds}{(\mathrm e^s+1)\sqrt{1-(s/w)^2}}-I_\infty\right)=J_\infty.
$$
Edit: To get an upper bound on $J$, note that $\left(1+\sqrt{1-(s/w)^2}\right)\sqrt{1-(s/w)^2}\geqslant\frac32$ and $\mathrm e^s+1\gt\mathrm e^s$ for every $s\leqslant\frac12w$, and that $s^2\lt w^2$, $\mathrm e^s+1\gt\mathrm e^{w/2}$ and $1+\sqrt{1-(s/w)^2}\gt1$ for every $\frac12w\leqslant s\leqslant w$. Hence,
$$
J(w)\leqslant\frac23\int_0^{w/2}s^3\mathrm e^{-s}\mathrm ds+w^2\mathrm e^{-w/2}\int_{w/2}^w\frac{s\mathrm ds}{\sqrt{1-(s/w)^2}},
$$
which, using the change of variable $s\to ws$ in the last integral, implies that
$$
J(w)\leqslant\frac23\int_0^{+\infty}s^3\mathrm e^{-s}\mathrm ds+\max\{w^4\mathrm e^{-w/2};w\gt0\}\cdot\int_{1/2}^1\frac{s\mathrm ds}{\sqrt{1-s^2}},
$$
that is, since $\int_{1/2}^1\frac{s\,\mathrm ds}{\sqrt{1-s^2}}=\frac{\sqrt3}2$ and $\max\{w^4\mathrm e^{-w/2};w\gt0\}=8^4\mathrm e^{-4}$, one gets $J(w)\leqslant\frac23\cdot6+8^4\mathrm e^{-4}\cdot\frac{\sqrt3}2\lt70$.
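A quick numerical cross-check of the two limiting constants (assuming SciPy is available; the closed form $J_\infty=\frac{7\pi^4}{240}$ comes from the standard value $\int_0^\infty\frac{s^3\,\mathrm ds}{\mathrm e^s+1}=\frac{7\pi^4}{120}$):

```python
import numpy as np
from scipy.integrate import quad

# write 1/(e^s + 1) as e^(-s)/(1 + e^(-s)) to avoid overflow for large s
I_inf, _ = quad(lambda s: s * np.exp(-s) / (1 + np.exp(-s)), 0, np.inf)
J_inf, _ = quad(lambda s: s**3 * np.exp(-s) / (2 * (1 + np.exp(-s))), 0, np.inf)

print(I_inf, np.pi**2 / 12)        # both approximately 0.8225
print(J_inf, 7 * np.pi**4 / 240)   # both approximately 2.8406
```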
H: why is the expected value of a Wiener Process = 0?
This section of wikipedia says that the expected value of a Wiener Process is equal to 0.
Why is that?
AI: In the characterizations at Wikipedia,

"$W_t$ has independent increments with $W_t-W_s \sim N(0,\, t-s)$ (for $0 \le s < t$)"

(the mean of the normally distributed increments is $0$)

"Lévy characterization: the Wiener process is an almost surely continuous martingale with $W_0 = 0$"

(a martingale has expected increment zero)

"spectral representation as a sine series whose coefficients are independent $N(0, 1)$ random variables"

(the coefficients have mean zero)

"scaling limit of a symmetric random walk"

(the random walk goes up and down with equal probability)
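A small simulation illustrating the zero mean (the time horizon, grid size, and number of paths below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 20_000, 1_000, 1.0
dW = rng.normal(0.0, np.sqrt(T / n_steps), size=(n_paths, n_steps))  # independent N(0, dt) increments
W_T = dW.sum(axis=1)                                                 # W_T = W_0 + sum of increments, with W_0 = 0

print(W_T.mean())   # close to 0, up to Monte Carlo noise of order 1/sqrt(n_paths)
```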
H: Show that there are irreducible polynomials of every degree in $\mathbb{Q}[X]$
There is this problem that I would like to ask for any verification whether my answer is correct.
Edited: Thanks @andybenji.
Show that for any $n\ge1$, there exists an irreducible polynomial $f\in\mathbb{Q}[X]$ of degree $n$.
My answer:
For degree n=0, a non-zero constant is a unit, hence it is irreducible in $\mathbb{Q}[X]$.
For all $n\ge1$, $x^n+2$ satisfies Eisenstein's Criterion with p=2, therefore it is irreducible in $\mathbb{Q}[X]$.
I am particularly doubtful about the case of degree 0. Is it correct that a non-zero constant is irreducible in $\mathbb{Q}[X]$? I saw my friend's note which says there are no irreducible polynomials of degree 0. Which one is correct?
Thanks!
AI: As I stated in the comments, the question was unclear, and a possible restatement would be
Show that, for any $n \geq 1$, there exists an irreducible polynomial $f \in \mathbb{Q}[x]$ of degree $n$.
To address the actual question, units are not irreducible. The definition of irreducible states
An element $f \in A$ is called irreducible if $f$ is not zero and not a unit, and for any expression $gh = f$, either $g$ or $h$ is a unit.
So it would, technically, be correct to say there are no irreducible polynomials of degree 0 over a field. |
H: p-norm Inequality
Let $x,y \in \mathbb{R}$, and $0<p<\infty$. Prove $$|x-y|^p\leq (1+2^p)(|x|^p+|y|^p) $$
The case $0<p\leq1$ is obvious, as it follows from the properties of the P-norm, where $P:=1/p$, $$|x-y|^p=|x-y|^{1/P}\leq |x|^{1/P} + |y|^{1/P} = |x|^{p} + |y|^{p} \leq (1+2^p)(|x|^p+|y|^p)$$. But I'm stumped for the case $1<p<\infty$. Could someone point me in the right direction? Hope I'm not missing something obvious.
AI: Thanks to @Potato's suggestion, we have $$\left|x-y\right|^p \leq \left( |x|+|y|\right)^p \leq 2^p\left(\frac{1}{2}\left|x\right|^p + \frac{1}{2}\left|y\right|^p \right)= 2^{p-1}(|x|^p+|y|^p) \leq (1+2^p)(|x|^p+|y|^p)$$ for the case $1<p<\infty$, where the middle inequality is the convexity of $t\mapsto t^p$.
H: Matrix computation from a given matrix
I have no idea how to go about this one, any hints on how to go about this one?
$$D = \begin{bmatrix}2 & 4 & 3\\0 & 1 & 0\\1 & 3 & 1\end{bmatrix}$$
Let:
$$T(\vec x) = D \vec x$$
Compute:
$$T\left(\begin{bmatrix}3 \\0 \\ -1 \end{bmatrix}\right)$$
AI: I suppose that $T$ is a linear transformation and $D$ is the matrix associated to $T$ in a certain base. Then
$$
T\left(\left[ \begin{matrix}
3\\
0\\
-1
\end{matrix} \right]\right) =
\left[\begin{matrix}
2 & 4 & 3\\
0 & 1 & 0\\
1 & 3 & 1
\end{matrix}\right]
\left[\begin{matrix}
3\\
0\\
-1
\end{matrix}\right] =
\left[\begin{matrix}
3\\
0\\
2
\end{matrix}\right]
$$ |
H: Convergence of the Series - $\sum_{n=1}^{\infty} \frac{\prod_{k=1}^{n}{(3k-1)}}{\prod_{k=1}^{n}(4k-3)}$
Prove that the following series is convergent.
$$\sum_{n=1}^{\infty} \frac{\prod_{k=1}^{n}{(3k-1)}}{\prod_{k=1}^{n}(4k-3)}$$
I don't know for where to begin.
AI: Hint: The Ratio Test is perfect for this. If $a_n$ is the $n$-th term, then
$$\frac{a_{n+1}}{a_n}=\frac{3n+2}{4n+1}\;\longrightarrow\;\frac34\lt1\quad\text{as } n\to\infty,$$ so the series converges.
Detail: Note that
$$a_{n+1}=\frac{\prod_1^{n+1} (3k-1)}{\prod_1^{n+1} (4k-3)}.$$
But
$$\prod_{k=1}^{n+1}(3k-1)=\left(\prod_1^n (3k-1)\right)\left(3(n+1)-1\right),$$
and
$$\prod_{k=1}^{n+1}(4k-3)=\left(\prod_1^n (4k-3)\right)\left(4(n+1)-3\right).$$
Now when we calculate $\dfrac{a_{n+1}}{a_n}$, we see there is very nice cancellation. |
H: Explicit generators of syzygies
Consider an $1\times n$ matrix
$$
\mathbf{A}=\begin{pmatrix}
f_1 &f_2 & \dots & f_n
\end{pmatrix}
$$
over $R=\mathbb{C}[X_1,\dots,X_r]$.
Let $M=\oplus_{i=1}^n R\mathbf{e}_i$ be the rank-$n$ free $R$-module.
We have the following Koszul complex:
$$
\wedge^2 M \xrightarrow{h} M\xrightarrow{g} R
$$
where $g$ sends $\mathbf{e}_i$ to $\mathbf{A}_{1i}=f_i$ and $h$ sends $\mathbf{e}_i\wedge \mathbf{e}_j$ to $f_j\mathbf{e}_i-f_i\mathbf{e}_j$.
In some cases ($f_i$'s form a regular sequence?) the complex is exact, and hence $f_j\mathbf{e}_i-f_i\mathbf{e}_j$'s form a set of generators for $\ker(g)\subseteq M$. I also learned from the basic theory of Gröbner bases that when $f_i$'s are all monomials, one can modify $h$ so that it sends $\mathbf{e}_i\wedge\mathbf{e}_j$ to $\frac{\mathrm{lcm}(f_i,f_j)}{f_i}\mathbf{e}_i-\frac{\mathrm{lcm}(f_i,f_j)}{f_j}\mathbf{e}_j$ and then the sequence is exact.
Question: do we have similar results in the case that $A$ is a $m\times n$ matrix?
Specifically, for $m\leq n$, consider an $m\times n$ matrix
$$
\mathbf{A}=\begin{pmatrix}
f_{11} &\dots & f_{1n} \\
\dots & \dots & \dots \\
f_{m1} &\dots & f_{mn}
\end{pmatrix}
$$
over $R=\mathbb{C}[X_1,\dots,X_r]$. Consider the map $g$ defined by $\mathbf{A}$:
$$
g:\oplus_{i=1}^n R\mathbf{e}_i\to\oplus_{i=1}^m R\mathbf{e}'_i
$$
that sends $\mathbf{e}_i$ to $\sum_{j=1}^m f_{ji}\mathbf{e}'_j$. Can we explicitly specify elements of $\ker(g)$? When do they generate $\ker(g)$?
In the case that all $f_{ij}$'s are monomials, do we have a similar result as the one mentioned above?
Thank you very much!
AI: The proper context of this question is how to generalize the Koszul complex (or at least its second differential) for a matrix with 1 row to a matrix with an arbitrary number of rows. The generalization is provided by the Buchsbaum-Rim complex.
I will give the kernel for the generic case (when all entries are separate variables; more generally, when the ideal generated by the maximal minors of the matrix has the expected depth, which is $n-m+1$ if $n \ge m$). In all cases these will give relations, though there could be more if the depth assumption is not satisfied. A treatment of this in more detail can be found in Appendix A2.6 of Eisenbud's Commutative Algebra, or in the paper "A generalized Koszul complex. I" by Buchsbaum.
First consider the case $n=m+1$. Let $g_i$ be $(-1)^{i+1}$ times the determinant of the matrix obtained by deleting column $i$. Then I claim $(g_1, \dots, g_n)$ is a kernel element. (In fact it's everything in the generic case.) To see this, take the $j$th row of the matrix and add it to the top. Clearly this has determinant 0, and doing Laplace expansion along this row shows that the $j$th entry of $g \cdot (g_1, \dots, g_n)^T$ is 0.
For the general case, we can do the same thing, but we get one relation for every choice of $m+1$ columns. This gives all relations in the generic case, though more work is needed to show that. So the module of relations is naturally isomorphic to an exterior power $\wedge^{m+1} R^n$. |
H: Is there a Dihedral group of order 4?
If I use the notation $D_{2n}$, then does $D_4$ make sense?
If I showed that a group $G$ is isomorphic to $H \times D_4$ where $H$ is a group, then is $G$ not a group?
I am asking this because in my other question the answerer didn't directly address my question.
AI: As $$D_{2n}=\langle x,y\mid x^n=y^2=(xy)^2=1\rangle$$ so $$D_4=\langle x,y\mid x^2=y^2=(xy)^2=1\rangle$$ so $$D_4/\langle x\rangle\cong\mathbb Z_2=\langle y\rangle$$ But $\langle y\rangle$ is normal in $D_4$ so $D_4\cong\mathbb Z_2\times\mathbb Z_2$ |
H: Can finite theory have only infinite models?
I always thought, that when creating a theory (set of formulas of predicate logic of first order in some language) and when you want to have only infinite models, you must use infinite number of axioms. That's how Peano arithmetic or ZF are made.
But when I look at Robinson arithmetic, it is finite. Am I missing something? Does Robinson arithmetic have finite models, or I was wrong - you can "enforce" only infinite models by finite theory?
AI: A theory with a finite number of axioms can easily have only infinite models. One of the simplest examples is the theory of densely ordered sets. The language (over the predicate calculus with equality) has a single binary predicate symbol $R$. For the sake of familiarity we write $x\lt y$ instead of $R(x,y)$.
The axioms say that $\lt$ is a total order, which is dense. The total order axioms say that for all $x$ and $y$, exactly one of $x\lt y$, $y\lt x$, and $x=y$ holds, and that $x\lt y$ and $y\lt z$ implies $x\lt z$. In addition to the usual axioms for total order, we ask that there is more than one element, and that for any $x$ and $y$ with $x\lt y$ there is a $z$ such that $x\lt z$ and $z\lt y$. This theory only has infinite models.
Or else we can use the basic theory of the successor function. Beside the usual symbols of first-order logic with equality, the language has a constant symbol $0$, and a unary function symbol $S$. The axioms say that for all $x$, $Sx\ne x$, that $0$ is not the successor of anything, and that every $x$ except $0$ has a unique predecessor. This is far simpler than Robinson Arithmetic, but only has infinite models.
There are much more complicated examples. For instance, the von Neumann- Gödel-Bernays set theory (NBG) is finitely axiomatized. |
H: Newton's binomial problem
It is known that in the expansion of $(x+y)^n$ there is a term of the form $1330x^{n-3}y^3$ and a term of the form $5985x^{n-4}y^4$.
Calculate $n$.
So, I know that the binomial formula of Newton is: $\sum_{k=0}^n \binom{n}{k}a^kb^{n-k}$, but I can not understand how to establish the relationship and how to solve it.
Someone can help me to do it?
AI: Just solve for either $\binom{n}{3}=1330$ or $\binom{n}{4}=5985$. I'd go for the former one since it would end up asking the solution to a cubic polynomial equation rather than the latter which ends up in a quartic equation. Or even better,
$\dfrac{\binom{n}{4}}{\binom{n}{3}}=\dfrac{5985}{1330}\implies\dfrac{\dfrac{n!}{4!(n-4)!}}{\dfrac{n!}{3!(n-3)!}}=\dfrac{5985}{1330}\implies\dfrac{n-3}{4}=\dfrac{5985}{1330}\implies n=21$
H: What is computational group theory?
What is computational group theory?
What is the difference between computational group theory and group theory?
Is it an active area of the mathematical research currently?
What are some of the most interesting results?
What is the needed background to study it?
AI: There is a nice survey of the subject area available in pdf: Survey: Computational Group Theory, which while somewhat dated, gives a nice introduction to the field and provides some historical insights.
Here's a very nice Introduction to Computational Group Theory. It's a brief but fascinating survey by Ákos Seress, published in the AMS Notices (1997: 06).
See also, of course, Wikipedia: Computational Group Theory. It's not a very lengthy entry, but there are nice references provided, and links that expand a bit more on what is discussed. Wikipedia mentions two computer algebra systems: GAP and Magma, which each have links to learn more. They are incredibly useful, powerful, and time-saving systems that enriches the study of group theory. GAP is freely available from its website, and also as part of the SAGE system, which is free to download as well, but can also be used online.
References include:
Derek F. Holt $\dagger$, Bettina Eick, Eamonn A. O'Brien, "Handbook of computational group theory", Discrete Mathematics and its Applications (Boca Raton). Chapman & Hall/CRC, Boca Raton, FL, 2005.
Charles C. Sims, "Computation with Finitely-presented Groups," Encyclopedia of
Mathematics and its Applications, vol 48, Cambridge University Press,
Cambridge, 1994.
Ákos Seress, "Permutation group algorithms", Cambridge Tracts in
Mathematics, vol. 152, Cambridge University Press, Cambridge, 2003.
$\dagger$ Note that author Derek F. Holt is a regular contributer on Math.SE, and has provided a nice link in a comment below! |
H: Greatest Common Divisor of two numbers
If $ab=600$ how large can the greatest common divisor of $a$ and $b$ be?
I am not sure if I should check for all factor multiples of $a$ and $b$ for this question. Please advise.
AI: We have $600=2^3\cdot 5^2\cdot 3$. To make the gcd large, we give a $2$ to each of $a$ and $b$, also a $5$. So the largest possible gcd is $10$.
Remark: The idea generalizes. Let $n$ have prime power factorization
$$n=p_1^{d_1}p_2^{d_2}\cdots p_k^{d_k}.$$
Let $e_i=\lfloor d_i/2\rfloor$, where $\lfloor x\rfloor$ is the greatest integer that is $\le x$. Then the greatest possible gcd of $a$ and $b$, where $ab=n$, is
$$p_1^{e_1}p_2^{e_2}\cdots p_k^{e_k}.$$ |
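A brute-force check of the $n=600$ case:

```python
from math import gcd

n = 600
print(max(gcd(a, n // a) for a in range(1, n + 1) if n % a == 0))   # 10
```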
H: How to apply De Morgan's law?
If for De Morgan's Laws
$$( xy'+yz')' = (x'+y)(y'+z)$$
Then what if I add more terms to the expression ...
$$(ab'+ac+a'c')' = (a'+b)(a'+c')(a+c)?$$
AI: We'll apply DeMorgan's "twice", actually, and get:
$$(ab'+ac+a'c')' = (ab')'(ac)'(a'c')' = (a'+b)(a'+c')(a+c)$$
So, yes, you are correct!
You can, likewise, apply (and/or "doubly apply") DeMorgan's to an indefinite number of terms, e.g.:
$$(ab + cd + ef + \cdots + yz)' = (a' + b')(c' + d')(e' + f')\cdots(y' + z')$$ |
H: Finding the closest point to the origin of $y=2\sqrt{\ln(x+3) }$
Given $$y=2\sqrt{\ln(x+3) }$$
How do I determine a (x,y) pair satisfying the above relation which is the closest to the origin (0,0)?
AI: Minimize the square of the distance from the curve to the origin (and hence, the distance from the curve to the origin), namely: $$x^2+y^2=x^2+4\ln(x+3).$$ |
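Carrying the hint through (a sketch of the remaining computation): the curve requires $\ln(x+3)\ge0$, i.e. $x\ge-2$, so minimize $f(x)=x^2+4\ln(x+3)$ on $[-2,\infty)$. Setting
$$f'(x)=2x+\frac4{x+3}=0\iff x^2+3x+2=0\iff x\in\{-2,-1\},$$
and since $f'<0$ on $(-2,-1)$ and $f'>0$ on $(-1,\infty)$, the minimum occurs at $x=-1$. The closest point is therefore $\left(-1,\;2\sqrt{\ln 2}\right)$, at distance $\sqrt{1+4\ln 2}\approx1.94$ from the origin.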
H: How to show that the limit of compact operators in the operator norm topology is compact
When I read the item of compact operator on Wikipedia, it said that
Let $T_{n}, ~~n\in \mathbb{N}$, be a sequence of compact operators from
one Banach space to the other, and suppose that $T_n$ converges to $T$
with respect to the operator norm. Then $T$ is also compact.
Can anyone give me a brief proof of this? Thanks in advance.
AI: A compact operator is one which is the limit (in the norm topology) of finite rank operators. Thus by a diagonal argument, the limit of a norm-convergent sequence of compact operators can be written as the limit of a norm-convergent sequence of finite rank operators, and so is again compact.
Added: As the comments below indicate, this argument is not valid in the generality of the question. But perhaps the general approach can be salvaged?
Turning to another characterization of compact operators (valid in full generality), let $T_m \to T$ be our norm convergent sequence of compact operators, and let $x_n$ be a sequence in the unit ball of their domain.
We have to find a Cauchy subsequence of $T(x_n).$ (This will show that
$T$ is compact.) We know that any subsequence of $T_m(x_n)$ ($m$ fixed)
contains a Cauchy subsequence (since $T_m$ is compact). We know that
$T_m(x_n)$ ($n$ fixed) converges to $T(x_n)$, and this convergence is
uniform in $n$ (since $|| T(x_n) - T_m(x_n) || \leq || T - T_m|| || x_n||
\leq || T - T_m||,$ which $\to 0$ as $m \to \infty$).
Now a diagonal argument will let us find a Cauchy subsequence of $T(x_n)$.
More precisely, let $x_{n_{1,i}}$ be a subsequence of $x_n$ such that $T_1(x_{n_{1,i}})$ is Cauchy. Pass to subsequences inductively as follows: assuming that we have chosen the subsequence
$x_{n_{m,i}}$, take $x_{n_{m+1,i}}$ to be a subsequence of $x_{n_{m,i}}$ such that $T_{m+1}(x_{n_{m+1,i}})$ is Cauchy.
Now define $x_{n_i} = x_{n_{i,i}}$. Then $T_m(x_{n_i})$ is Cauchy for every $m$, and you can deduce from this that $T(x_{n_i})$ is Cauchy. |
H: Every primitive matrix is irreducible?
$A$ is reducible if there is some permutation matrix $P$ such that
$$ PAP^T =
\begin{bmatrix}
B & C \\
O & D \\
\end{bmatrix}
$$
And, if $A^k > O$ for some k, then $A$ is called primitive.
Then, how can I show that every primitive matrix is irreducible?
AI: Your definition of primitive matrix is wrong. A primitive matrix is a nonnegative matrix $A$ such that $A^k>0$ for some natural integer $k$. You cannot remove the nonnegativity requirement on $A$ and the positivity requirement on $k$.
If $A$ is reducible, then it is, by definition, permutation-similar to a block upper triangular matrix $M$. Since $M^k$ is block upper triangular for all $k\in\mathbb{N}$, it always contains a zero block. Yet $M^k$ is permutation-similar to $A^k$. So, $A^k$ always contains a zero entry and hence it is not primitive. |
H: Support vs range of a random variable
Is there any difference between the two? I have not met any formal definition of the support of a random variable. I know that for the function $f$ the support is the closure of the set $\{x:\;f(x)\ne0\}$.
AI: The support of the probability distribution of a random variable $X$ is the set of all points whose every open neighborhood $N$ has the property that $\Pr(X\in N)>0$.
It is more accurate to speak of the support of the distribution than that of the support of the random variable.
The complement of the support is the union of all open sets $G$ such that $\Pr(X\in G)=0$. Since the complement is a union of open sets, the complement is open. Therefore the support is closed. |
H: Conditional Expected Value and distribution question
The distribution of loss due to fire damage to a warehouse is:
$$
\begin{array}{r|l}
\text{Amount of Loss (X)} & \text{Probability}\\
\hline
0 & 0.900 \\
500 & 0.060 \\
1,000 & 0.030\\
10,000 & 0.008 \\
50,000 & 0.001\\
100,000 & 0.001 \\
\end{array}
$$
Given that a loss is greater than zero, calculate the expected amount of the loss.
My approach is to apply the definition of expected value:
$$E[X \mid X>0]=\sum\limits_{x_i}x_i \cdot p(x_i)=500 \cdot 0.060 + 1,000 \cdot 0.030 + \cdots + 100,000 \cdot 0.001=290$$
I am off by a factor of 10--The answer is 2,900. I am following the definition of expected value, does anyone know why I am off by a factor of $1/10$?
Should I be doing this instead???
$E[X \mid X>0] = \sum\limits_{x_i} (x_i \mid x_i > 0) \cdot \cfrac{\Pr[x_i \cap x_i>0]}{\Pr(x_i > 0)}$
Thanks.
AI: You completely missed the word "given". That means you want a conditional probability given the event cited. In other words, your second option is right.
For example the conditional probability that $X=500$ given that $X>0$ is
$$
\Pr(X=500\mid X>0) = \frac{\Pr(X=500\ \&\ X>0)}{\Pr(X>0)} = \frac{\Pr(X=500}{\Pr(X>0)} = \frac{0.060}{0.1} = 0.6.
$$ |
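Carrying this through for every positive loss amount simply rescales the sum already computed in the question by $\Pr(X>0)=0.1$:
$$E[X\mid X>0]=\frac{\sum_{x>0}x\,\Pr(X=x)}{\Pr(X>0)}=\frac{290}{0.1}=2900.$$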
H: Concrete Mathematics Iversonian Set Relation Clarification
Sorry for asking a very dumb question, but in Concrete Mathematics(Graham,Knuth,Patashnik), chapter 2 section 4, Knuth talks about this formula called "Rocky Road".
This is the formula to use when you want to interchange the order of summation of a double sum whose inner sum's range depends on the index variable of the outer sum, like this:
\begin{equation}
\displaystyle\sum\limits_{j = 1}^{n}{\displaystyle\sum\limits_{k = j}^{n}{a_{j,k}}}
\end{equation}
The rocky road formula is as follows:
\begin{equation}
\displaystyle\sum\limits_{j \in J}^{}{\displaystyle\sum\limits_{k \in K(j)}} = \displaystyle\sum\limits_{k \in K'}{\displaystyle\sum\limits_{j \in J'(k)}}
\end{equation}
With the requirement that the sets $J,K(j),K', \text{and} J'(k)$ be related in such a way that:
\begin{equation}
[j \in J][k \in K(j)] = [k \in K'][j \in J'(k)]
\end{equation}
My understanding of this, is that the set $K'$ is basically the "bounds" that $k$ has, sort of like its restrictions. For the first sum I wrote, I would think that K' be the set {j,j+1,...,n} and since j starts at 1, K' is really just the set of the values $1 \rightarrow n$. However, I can't really figure out $J'(k)$ here , or even if my understanding of $K'$ is right. Can anyone give me some pointers or put me in the right track as to understanding this set relation?
Thanks,
AI: The requirement is basically saying that you must sum over the same set of numbers $a_{j,k}$ on both sides: if a particular pair $(j, k)$ occurs somewhere on the left-hand side, then $(k, j)$ must occur somewhere on the right-hand side, and vice-versa.
Looking at some simple examples might help.
$$\sum_{j=1}^{n} \sum_{k=1}^{n} a_{j,k} = \sum_{k=1}^{n} \sum_{j=1}^{n} a_{j,k}$$
It should be clear to you that the above equation holds: Here, you're summing over all numbers $a_{j,k}$ such that $1 \le j \le n$ and $1 \le k \le n$. In terms of the notation, $J = \{1, 2, \dots, n\}$, $K(j) = \{1, 2, \dots, n\}$, $K' = \{1, 2, \dots, n\}$, and $J'(k) = \{1, 2, \dots, n\}$ (all the same).
$$\sum_{j=1}^{n} \sum_{k=j}^{n} a_{j,k} = \sum_{k=1}^{n} \sum_{j=1}^{k} a_{j,k}$$
Here, if you view the numbers $a_{j,k}$ as being laid out in an $n \times n$ matrix, then on the left hand side, you're summing over the "upper triangle" (numbers on or above the diagonal) of the matrix: for each row ($j$), you add up numbers from only those columns ($k$) that occur at $j$ (on the diagonal) or more (to the right). If you try to describe this "upper triangle" by columns instead, then for each column $k$, you must add up the numbers from the rows $j$ that either occur on the diagonal (so, at $k$) or above it (so, smaller $j$). That gives the right-hand side.
How the notation helps here (or may help) is that you can say $[j \in J] = [1\le j \le n]$, and $[k \in K(j)] = [j\le k \le n]$, so after taking $K' = \{1, 2, \dots, n\}$ so that $[k \in K'] = [1 \le k \le n]$, you're trying to find $J'(k)$ such that
$$\begin{align}
[k \in K'][j \in J'(k)] &= [j \in J][k \in K(j)] \\
[1 \le k \le n][j \in J'(k)] &= [1 \le j \le n][j \le k \le n] = [1 \le j \le k \le n]\\
\end{align}$$
which says you must take $[j \in J'(k)] = [1 \le j \le k]$ or $J'(k) = \{1, 2, \dots, k\}$. Thus the sum on the right-hand side.
Whether the notation helps you more than thinking directly, varies from person to person and depends on you.
A final example, slightly less trivial, which you can work out for yourself:
$$\sum_{j=1}^{n} \sum_{k\text{ divides }j} a_{j, k} = \sum_{k=1}^{n} \sum_{\substack{j\text{ is a multiple of }k\\\text{and }j \le n}} a_{j, k}$$ |
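A quick numerical check that the two orders of summation in the upper-triangle example agree (random values, indices $1\le j,k\le n$):

```python
import random

n = 6
a = [[random.random() for _ in range(n + 1)] for _ in range(n + 1)]   # a[j][k], only indices 1..n are used

lhs = sum(a[j][k] for j in range(1, n + 1) for k in range(j, n + 1))
rhs = sum(a[j][k] for k in range(1, n + 1) for j in range(1, k + 1))

print(abs(lhs - rhs) < 1e-12)   # True: both orders run over exactly the pairs 1 <= j <= k <= n
```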