H: Is it possible to use the Central limit theorem for standard Cauchy distribution?
I have a simple question about the CLT (Central Limit Theorem).
Is it possible to use the Central limit theorem for standard Cauchy distribution?
I think it's not possible, because the mean of the standard Cauchy distribution is undefined, and the same is true of the variance.
Is that true?
Thanks for your help, have a nice day
AI: Let $X_1,X_2,\dots,X_n$ be independent identically distributed standard Cauchy random variables. Then $\frac{X_1+\cdots+X_n}{n}$ has standard Cauchy distribution. So the Central Limit Theorem fails spectacularly.
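A quick simulation makes the failure vivid (a sketch, assuming Python with numpy is available): sample means of standard Cauchy variables never settle down as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
for n in [10, 1_000, 100_000]:
    # five independent sample means of n standard Cauchy variables each
    means = rng.standard_cauchy(size=(5, n)).mean(axis=1)
    print(n, np.round(means, 2))   # the spread does not shrink as n grows
```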
H: Recurrence relations problem (linear, 2nd order, constant coeff, homogeneous)
I'm currently stonewalled on this problem, in which I have to solve the following recurrence relation and prove my answers satisfy the recurrence.
My boundary conditions are $a_0 = 1$ and $a_1 = 9$.
The problem is: $\forall n \in \mathbb{N}$, $a_{n+2}-18a_{n+1}+9a_{n}=0$
I ended up with the sequence proceeding as $1, 9, 153, 2673, 46737$ for $a_{0}, a_{1}, a_{2}, a_{3},$ and $a_{4}$ respectively but I can't actually figure out the pattern that would give me a solution. Help?
AI: There is a standard procedure for dealing with such problems. One looks for solutions of the form $r^n$.
Substituting in the recurrence, we get, if I am reading the coefficients correctly, the equation $r^{n+2}-18r^{n+1}+9r^n=0$. On the assumption $r\ne 0$, we get $r^2-18r+9=0$, which has the solutions
$$r=9\pm6\sqrt{2}.$$
Let $\alpha$ and $\beta$ be these solutions of the quadratic equation. It is easy to verify that for any constants $A$ and $B$, $A\alpha^n+B\beta^n$ is a solution of the recurrence.
From the initial conditions, you can calculate what $A$ and $B$ must be. (That part is left to you.)
So for the $A$ and $B$ that you found, if we put $a_n=A\alpha^n+B\beta^n$, the recurrence and initial conditions are satisfied. But it is easy to prove by induction that there is only one solution to the recurrence with initial conditions. So we must have found it.
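(For the record, carrying out that last computation: the initial conditions give
$$A+B=1,\qquad A\alpha+B\beta=9(A+B)+6\sqrt2\,(A-B)=9,$$
so $A=B=\tfrac12$ and $a_n=\tfrac12\left((9+6\sqrt2)^n+(9-6\sqrt2)^n\right)$. As a check, $a_2=\tfrac12\big((\alpha+\beta)^2-2\alpha\beta\big)=\tfrac12(324-18)=153$, matching the value computed in the question.)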
Remark: The method described above can be used for all linear homogeneous recurrences with constant coefficients. There is a small complication when the equation we get has roots of multiplicity greater than $1$.
H: Probability Density Function of a Minimum Function
Suppose $U_1, U_2, \dots, U_5$ are independent $\operatorname{unif}(0,1)$ random variables.
Suppose $T = \min(U_1, U_2, \dots, U_5)$.
How would I find the p.d.f. of $T$? I know how to do regular cases like $T = A + B$ given the distribution of $A$ and $B$ but how do I deal with a minimum?
AI: We find the cumulative distribution function $F_T(t)$ of $T$. Given any number $t$, we need to find $\Pr(T\le t)$. It is easier to find first $\Pr(T\gt t)$.
The minimum of the $U_i$ is $\gt t$ precisely if all the $U_i$ are greater than $t$.
For $0\le t\le 1$, we have $\Pr(U_i\gt t)=1-t$.
So the probability all the $U_i$ are $\gt t$ is, by independence, $(1-t)^5$.
It follows that
$$\Pr(T\le t)=F_T(t)=1-(1-t)^5,$$
for all $t$ between $0$ and $1$. For the probability density function $f_T(t)$, differentiate $F_T(t)$.
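Carrying out that differentiation gives
$$f_T(t)=5(1-t)^4,\qquad 0\le t\le 1,$$
and $f_T(t)=0$ outside this interval.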
Remark: The same basic idea can be used to find the distribution of the minimum of $n$ independent random variables with continuous distribution. A simpler variant deals with the distribution of the maximum.
H: Interval of convergence of $\sum\limits_{n\geq0} \binom{2n}{n} x^n$
We consider the power series $\displaystyle{\sum_{n\geq0} {2n \choose n} x^n}$. By Ratio Test, the radius of convergence is easily shown to be $R=\frac{1}{4}$.
For $x=\frac{1}{4}$, Stirling equivalent and Ratio Test imply that the series is divergent. For $x=\frac{-1}{4}$, Stirling equivalent and Alternating Series Test can be used to show the convergence of the series. Thus the interval of convergence is $\left[\frac{-1}{4},\frac{1}{4}\right)$.
My question: is there a (preferably simple ^^) method to determine the interval of convergence without using equivalents? My students don't know about equivalents.
AI: Let $a_n=\frac{1}{4^n}\binom{2n}{n}$. Then
\begin{eqnarray}
a_n&=&\frac{(2n)\cdot(2n-1)\cdot\dots\cdot 2 \cdot 1}{(2n)^2 \cdot (2n-2)^2 \cdot \dots \cdot 2} \\
&=&\frac{(2n-1) \cdot (2n-3) \cdot \dots \cdot 3 \cdot 1}{(2n) \cdot (2n-2) \cdot \dots \cdot 4 \cdot 2} \\
&=& \prod_{k=1}^n\left(1-\frac{1}{2k}\right)\\
&<& \prod_{k=1}^n \sqrt{\left(1-\frac{1}{2k+1}\right)\left(1-\frac{1}{2k}\right)}\\
&=&\sqrt{\frac{(2n)!}{(2n+1)!}}\\
&=&\sqrt{\frac{1}{2n+1}} \, .
\end{eqnarray}
So $a_n$ tends to $0$ as $n \to \infty$; as $a_n$ is clearly monotone, the alternating series test proves convergence for $x=-1/4$.
You can make a similar comparison to show that $a_n>\sqrt{\frac{1}{4n}}$, which proves divergence for $x=1/4$.
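If you want a quick numeric sanity check of both bounds, here is a sketch using only the Python standard library (at $n=1$ the lower bound holds with equality, so the loop starts at $n=2$):

```python
from math import sqrt

# check 1/sqrt(4n) < a_n < 1/sqrt(2n+1) for a few n
for n in [2, 10, 100, 1000]:
    a = 1.0
    for k in range(1, n + 1):   # a_n = prod_{k=1}^{n} (1 - 1/(2k))
        a *= 1 - 1 / (2 * k)
    print(n, sqrt(1 / (4 * n)) < a < sqrt(1 / (2 * n + 1)))
```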
H: Uniform convergence of functions and intervals
We define $f_n:\mathbb{R}\to\mathbb{R}$ by $f_n(x)=\dfrac{x}{1+nx^2}$ for each $n\ge 1$.
I compute that $f(x):= \displaystyle\lim_{n\to \infty}f_n(x) = 0$ for each $x\in\mathbb{R}$.
Now, I want to know in which intervals $I\subseteq \mathbb{R}$ the convergence is uniform.
Any hint? Thanks.
AI: If you use the standard technique of bounding the supremum of $|f_n-f|$: the maximum over $x\in(-\infty,\infty)$ of $|f_n|$ is achieved at $x=\pm\frac{1}{\sqrt{n}}$ and equals $\frac{1}{2\sqrt{n}}$. So, we have
$$ \sup_{x\in\mathbb R}| f_n(x)-f(x) |= \sup_{x\in\mathbb R} \Big| \dfrac{x}{1+nx^2} \Big|= \frac{1}{2\sqrt{n}} \xrightarrow[n\to\infty]{} 0.$$
This shows the sequence converges uniformly over $\mathbb{R}$.
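For completeness, here is the maximization step behind that claim:
$$\frac{d}{dx}\left(\frac{x}{1+nx^2}\right)=\frac{1-nx^2}{(1+nx^2)^2}=0\iff x=\pm\frac{1}{\sqrt{n}},\qquad f_n\!\left(\frac{1}{\sqrt{n}}\right)=\frac{1}{2\sqrt{n}}.$$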
Added: a plot of the function for $n=1,2,3$ (omitted here) illustrates this.
H: Why do we use the inverse conversion formula to convert slope per radians to slope per degrees
This is a contribution question I'm making in hopes that others may benefit. I will provide my answer underneath. Initially I wanted to ask this question, but I solved it myself and I'd like to give back for the question I asked earlier. I will wait a day before selecting an answer in case anyone else wants a crack at it.
I asked a question earlier here about why I was getting a different answer when calculating the tangent slope of a line by using the secant method compared to using the Calculus method. I found out that the Calculus method always produces an answer relative to radian measure.
So the two answers I had were actually the same, except one was slope per radians and one was slope per degrees.
Why is it that when we convert radians to degrees we multiply radians $\times\frac{180}{\pi}$, but when we convert slope per radian to slope per degree we have to multiply by the inverse conversion factor, slope per radian $\times\frac{\pi}{180}$?
I should give an example to explain better.
Given r radians, we get degrees by $\ r * \frac{180}{\pi}\ $
Given d degrees, we get radians by $\ d * \frac{\pi}{180}\ $
However, the opposite is true when converting slope per radians and slope per degrees:
Given s slope per radians, we get slope per degrees by $\ s * \frac{\pi}{180}\ $
Given t slope per degrees, we get slope per radians by $\ t * \frac{180}{\pi}\ $
It seems illogical that we convert slope per radian to slope per degree by using the factor which converts degrees to radians, since the slope is already in radian form.
AI: Why is it that when we convert radians to degrees we multiply radians $\times \frac{180}π$ , but when we convert slope per radians to slope per degrees we have to multiply the inverse conversion formula slope per radians $\times \frac{π}{180}$
If we want to know an hour in terms of minutes, we multiply 1 hour $\times \frac{60}{1}$, giving the result in minutes. If we want to convert 180 minutes back, we divide $180$ minutes by $60$, i.e., multiply $180 \times \frac 1{60}$.
You'll find this phenomenon in any conversion: to convert temperature in degrees Celsius to temperature in Fahrenheit, we have $F = \frac 95 C + 32$. To convert from F to C, we need to invert this: $C = \frac 59(F-32)$.
$R \text{ radians}\;\times \dfrac{180^\circ}{\pi \;\text{radians}} = \dfrac{R\times 180^\circ}{\pi}$.
"Radians" cancels as the unit, leaving a numeric value expressed in degrees.
$D \text{ degrees}\; \times \dfrac{\pi \;\text{radians}}{180\; \text{degrees}} = \dfrac{D\pi\;}{180}\;\text{ radians}$.
"Degrees" cancels as the unit, leaving the value expressed in radians.
Moved from comments:
Note that slope per radian is a ratio: $\;\dfrac{\text{slope}}{\text{radians}}.\;$ So to obtain "slope per degree", you need to have $π$ radians in the numerator, to cancel the unit "radians" from the denominator, and $180$ degrees in the denominator, to end with slope/degrees.
Slope itself is not a "unit" per se, meaning it isn't a degree, or radian, a mm, or foot. It is unit-free: even if we assign distance units (meters, say) to the displacements, e.g. $\Delta y\,\text{m}=y_2\,\text{m}-y_1\,\text{m}$ and $\Delta x\,\text{m}=x_2\,\text{m}-x_1\,\text{m}$, then in $$\text{slope}\,= \dfrac{\Delta y\,\text{m}}{\Delta x\,\text{m}}=\frac{\Delta y}{\Delta x},$$ the units "m: meter" attached to "change in $y$" and "change in $x$" cancel in the ratio defining slope, leaving us with the unit-free scalar which slope really is.
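A small numeric illustration of this bookkeeping (a sketch, assuming Python with the standard library): the secant slope of $\sin$ near $0$ is about $1$ per radian, but about $\pi/180$ per degree.

```python
import math

h = 1e-6
# slope with the input measured in radians
print(math.sin(h) / h)                  # ~ 1.0
# slope with the input h measured in degrees
print(math.sin(math.radians(h)) / h)    # ~ 0.0174533
print(math.pi / 180)                    # the same conversion factor
```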
H: Is the curl of every non-conservative vector field nonzero at some point?
Counterexamples? Intuitively, why?
Thanks for any answers.
As a side note, in what math class are gradient, divergence and curl taught typically?
AI: The question to ask is:
If there is a smooth vector field $\mathbf{v}$ such that
$$\nabla \times \mathbf{v} = 0 \quad \text{ in } \Omega,$$
is there a smooth function $\phi$ such that
$$\mathbf{v} = \nabla\phi?$$
If the domain is simply-connected (for example, a ball or a box, but not a cup with a handle), then this field must be a conservative field, i.e., $\mathbf{v} = \nabla \phi$.
In general, we can have non-conservative field having zero curl. Let
$$Z = \mathrm{ker}(\nabla\times) := \{\mathbf{w} \text{ is smooth}: \nabla \times \mathbf{w} = 0\},$$
which is the space of the fields having zero curl and
$$B = \mathrm{im}(\nabla) = \{\nabla \psi: \psi \text{ is smooth}\},$$
which is the range of the gradient operator (i.e., all gradient fields). Then
$$
\mathrm{dim}\big(Z/B\big) = \beta_1\tag{1}
$$
where $\beta_1$ is the first Betti number of the domain of interest $\Omega$, and
$$\beta_1 = (\# \text{ of holes in the domain})$$
roughly speaking. (1) essentially means:
$$
\text{Field whose curl is zero} = \text{Conservative vector field } + \text{"Something"}. \tag{2}
$$
This "something" here is a $\beta_1$-dimensional space.
(1) follows from the coincidence of the dimension of the de Rham cohomology group and the homology group. For a simply connected domain, $\beta_1 =0$. If the domain has $k$ holes, the difference is a $k$-dimensional space.
Intuition: To be honest I don't know the intuition here either; if someone knows, please enlighten me as well.
How to find this "something": Let's use the example in the Wikipedia entry on conservative fields that AWertheim mentioned in the first comment, but modify it a little bit. For an infinite cylinder $C_{x=y=0,z\in (-\infty,\infty)}(1)$ with radius 1 in $\mathbb{R}^3$ that contains the origin, we dig a hole of radius $\epsilon$ along the $z$-axis:
$$\Omega = C_{x=y=0,z\in (-\infty,\infty)}(1)\backslash C_{x=y=0,z\in (-\infty,\infty)}(\epsilon)$$
Then by the above, that "something" is a one-dimensional space. Let $\mathbf{A}$ be a member of this something; we can pose a boundary value problem, which is the technique used in the Helmholtz decomposition of vector fields:
$$\left\{
\begin{aligned}
\nabla \times \mathbf{A} &= 0\quad \text{ in }\Omega
\\
\nabla \cdot \mathbf{A} &= 0 \quad \text{ in }\Omega
\\
\frac{1}{2\pi}\oint_{\gamma_C} \mathbf{A} \cdot d\mathbf{s} &= 1
\\
\mathbf{A} \cdot \mathbf{n} &= 0 \quad \text{ on }\Gamma,
\end{aligned}
\right.$$
where $\gamma_C $ is a counterclockwise closed curve living on the interior cylinder surface with winding number 1, and $\Gamma$ is the boundary of the exterior cylinder. We can find that this $\mathbf{A}$ is what Wikipedia's entry has:
$$
\mathbf{A} = \left( \frac{-y}{x^2+y^2}, \frac{x}{x^2+y^2}, 0 \right).
$$
That "something" in (2) is just a multiple of $\mathbf{A}$. And now we can say that if $\nabla \times \mathbf{v} = 0$ in $\Omega$ above, then
$$
\mathbf{v} = \nabla \phi + c\mathbf{A},
$$
where $c$ is a constant.
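One can verify symbolically that this $\mathbf{A}$ is curl-free away from the $z$-axis; here is a sketch assuming sympy is available.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
Ax, Ay, Az = -y / (x**2 + y**2), x / (x**2 + y**2), sp.Integer(0)

# curl components of A = (Ax, Ay, Az)
curl = (sp.diff(Az, y) - sp.diff(Ay, z),
        sp.diff(Ax, z) - sp.diff(Az, x),
        sp.diff(Ay, x) - sp.diff(Ax, y))
print([sp.simplify(c) for c in curl])   # [0, 0, 0] away from x = y = 0
```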
Relevant question: What is the solution to Nash's problem presented in "A Beautiful Mind"?
Your last question: You can learn div, curl, and grad in Calculus III, I believe, along with Green's theorem, the divergence theorem, and Stokes' theorem.
H: Question on $\mbox{Ext}^1$
I have 2 questions, one of them concerning when quotient groups (rings) are isomorphic, and the other on $\mbox{Ext}^1$. It's pretty long, but the two are somehow related to each other, so I just kinda put everything here instead of dividing it into different posts.
Question 1
I know that given $A, B$ subgroups of an Abelian group $G$, if $A \cong B$, then it's not necessarily true that $G/A \cong G/B$ (I can find a counterexample of this). But is the reverse true? If $G/A \cong G/B$, is it true that $A \cong B$? I think that it's false, but I cannot find a counterexample of it. :(
And moreover, how can I find counterexamples of the above 2 statements in quotient rings? I've been searching for a counterexample in common rings, like $\mathbb{Z}, \mathbb{Q}, \mathbb{Z}[x]$, but with no luck.
Question 2
So basically, I understand that if you are trying to prove something of the form $X/X' \cong Y/Y'$, then showing that $X\cong Y$ and $X' \cong Y'$ is definitely not enough. Ok, here's my second question.
Given a ses of $R-$modules $0 \to M \xrightarrow{f} P \xrightarrow{g} X \to 0$, where $P$ is projective, and an $R-$module $Y$.
Here's a proof of $\mbox{Ext}^1 (X;Y) \cong \mbox{Hom}(M;Y)/\mbox{Im}(\mbox{Hom}(f;1_Y))$, that I don't really understand.
Firstly, construct a projective resolution for $M$, i.e.:
$$... \xrightarrow{\delta_2} P_1 \xrightarrow{\delta_1} P_0 \xrightarrow{\delta_0} M \to 0$$
Then, we can splice the above resolution with the given ses to arrive at a projective resolution for $X$, as follows.
$$... \xrightarrow{\delta_2} P_1 \xrightarrow{\delta_1} P_0 \xrightarrow{f\delta_0} P \xrightarrow{g} X \to 0$$
Now, drop the $X$ to obtain the following complex:
$$... \xrightarrow{\delta_2} P_1 \xrightarrow{\delta_1} P_0 \xrightarrow{f\delta_0} P \to 0$$
Apply $\mbox{Hom}(-; Y)$ to the above complex to have yet another complex:
$$... \xleftarrow{\mbox{Hom}(\delta_2; 1_Y)} \mbox{Hom}(P_1; Y) \xleftarrow{\mbox{Hom}(\delta_1; 1_Y)} \mbox{Hom}(P_0; Y) \xleftarrow{\mbox{Hom}(f\delta_0; 1_Y)} \mbox{Hom}(P; Y) \leftarrow 0$$
Now by definition, we have: $\mbox{Ext}^1(X;Y) = \mbox{ker} \ \mbox{Hom}(\delta_1; 1_Y) / \mbox{im} \ \mbox{Hom}(f \delta_0; 1_Y)$ (*)
Now, we'll try to express $\mbox{ker} \ \mbox{Hom}(\delta_1; 1_Y)$ in other terms.
Since $P_1 \xrightarrow{\delta_1} P_0 \xrightarrow{\delta_0} M \to 0$ is exact, apply $\mbox{Hom} (-; Y)$ to this sequence, to have an exact sequence: $\mbox{Hom}(P_1; Y)\xleftarrow{\mbox{Hom}(\delta_1; 1_Y)} \mbox{Hom}(P_0; Y) \xleftarrow{\mbox{Hom}(\delta_0; 1_Y)} \mbox{Hom}(M; Y) \leftarrow 0$, which means that: $\mbox{ker} \ \mbox{Hom}(\delta_1; 1_Y) = \mbox{im} \ \mbox{Hom}(\delta_0; 1_Y) \cong \mbox{Hom} (M; Y)$ (since $\mbox{Hom}(\delta_0; 1_Y)$ is injective).
And $\mbox{im} \ \mbox{Hom}(f \delta_0; 1_Y) = \mbox{Hom}(\delta_0; 1_Y)\left( \mbox{Hom}(f; 1_Y)(\mbox{Hom}(P; Y))\right) \cong \mbox{Im} \ \mbox{Hom}(f; 1_Y)$ (since $\mbox{Hom}(\delta_0; 1_Y)$ is injective).
And then, the author just kind of plug everything in (*), to have:
$$\mbox{Ext}^1(X;Y) = \mbox{ker} \ \mbox{Hom}(\delta_1; 1_Y) / \mbox{im} \ \mbox{Hom}(f \delta_0; 1_Y) \cong \mbox{Hom} (M; Y) / \mbox{Im} \ \mbox{Hom}(f; 1_Y)$$
This proof doesn't look good to me, since AFAIK we cannot prove isomorphisms of quotient modules like that. Or am I missing something here? If this proof is wrong, is there any way that I can fix it?
Thank you guys very much,
And have a good day, :*
AI: Q1: Let $G = \mathbb{Z} = A$, $B = 2 \mathbb{Z} \cong \mathbb{Z}$. $G/A = 0$ but $G/B = \mathbb{Z}/2\mathbb{Z}$ (Edit: So this is an example that $A\cong B$ but $G/A \ncong G/B$, not what the OP was asking about (Thanks @Alex for pointing this out). I will leave this here for reference since I think it is a good example to keep in mind.)
Q2: Here is the way you want to prove this. Look at the long exact sequence for $\text{Ext}^1$: Given a short exact sequence $0 \to M \to P \to X \to 0$ the Ext long exact sequence (the cohomology long exact sequence associated to the right derived functor $\text{Ext}^*(-,Y)$) is:
$$ 0 \to \text{Hom}(X,Y) \to \text{Hom}(P,Y) \stackrel{f^*}{\to} \text{Hom}(M,Y) \stackrel{\delta}{\to} \text{Ext}^1(X,Y) \to \text{Ext}^1(P,Y) = 0$$
Hence by exactness, $\text{Ext}^1(X,Y) \cong \text{Hom}(M,Y)/f^*(\text{Hom}(P,Y) )$ as required.
All of the stuff in the proof you provided kinda looks like material used to show that $\text{Ext}^*(-,Y)$ is a (cohomological) $\delta$-functor.
H: Exercise 28 of Chapter 1 in Stein-Shakarchi's Real Analysis
I am doing an exercise [similar to] Exercise 28 in Chapter 1 of Stein-Shakarchi's Real Analysis (page 44):
For any $E \subseteq \Bbb{R}^d$, $0 < \alpha <1$, we can find an open set $O \supseteq E$ such that $m_\ast(E) > \alpha m_\ast (O)$ where $\ast$ means the exterior measure.
I am not sure how to say this, but am I correct when I say that for all $\epsilon > 0$ there is $O$ open such that $m_\ast(O) - m_\ast(E) < \epsilon$? But from here, how do I get the result they claim?
Also, saying that for all $\epsilon > 0$ there is $O$ open such that $m_\ast(O) - m_\ast(E) < \epsilon$ is not the same as saying there is $O$ open such that $m_\ast(O - E) \leq \epsilon$, yes?
[Added by @Jack, 6/29/19] The problem in OP is not the original exercise in the book:
AI: Just choose $ 0 <\epsilon < m_*(E)(1-\alpha)/\alpha $; then you have $ \alpha (m_*(E)+\epsilon)<m_*(E) $. Now there exists an open $ O \supseteq E $ such that $ m_*(O) < m_*(E) + \epsilon $, hence
$$ \alpha m_*(O) < \alpha(m_*(E)+\epsilon) < m_*(E) $$
H: Infinitely many zeros of a nonconstant continuous function?
Let $f:[0,1]\to\mathbb{R}$ be a nonconstant continuous function. Is $S=\{x: f(x)=0\}$ finite?
I have thought of a function with countably many $0$'s like lots of triangular bumps at each point $\{1/n\}$, I mean lots of $W/M$ shapes on $[0,1]$. Is it okay?
AI: Your suggestion will work well. If you want an explicit formula, look at $f(x)=x\sin(1/x)$ when $x\ne 0$, and $f(0)=0$.
H: $\{\cos n+\sin n\}$ have a convergent subsequence?
Does the $\{\cos n+\sin n\}$ have a convergent subsequence?
I am totally clueless.
AI: HINT: It’s a bounded sequence: $-2\le\cos n+\sin n\le 2$ for all $n$. That should be enough, but if not, a further hint is spoiler-protected below.
See the Bolzano-Weierstrass theorem.
H: If $f \circ g = f$, prove that $f$ is a constant function.
Suppose $A$ is a nonempty set and $f: A \rightarrow A$ and for all $g:A \rightarrow A,$ $f \circ g = f$. Prove that $f$ is a constant function.
This result seems obvious, but I can't seem to find a way to prove it. The book I got this problem from hinted that the reader should consider what would happen if $g$ were a constant function. I considered that case and it was easy to prove that $f$ was a constant function, but I can't find any way to prove it when $g$ is not a constant function. At this point I'm not really sure how to approach this problem anymore. Thanks in advance for any help!
AI: The assumption is that $f\circ g=f$ for all $g:A\to A$. Thus, you can pick any $a_0\in A$ and let $g:A\to A:a\mapsto a_0$ and know that $$f(a)=f\big(g(a)\big)=f(a_0)$$ for all $a\in A$. This already tells you that $f$ is constant: it assigns the value $f(a_0)$ to every $a\in A$. You don’t need to consider any other functions $g:A\to A$.
H: two sequences asymptotic, then their differences go to zero
One last problem before I go to sleep, I must be too tired to see this one. This is another qual type question.
Let $\{x_{n}\}$ and $\{y_{n}\}$ be sequences of real numbers such that $\forall n \in \mathbb{N}$
$y_{n} \neq 0$ and $\frac{x_{n}}{y_{n}} \rightarrow 1$.
Show if either sequence is bounded, that $(x_{n}-y_{n}) \rightarrow 0$
I'm probably the most unclear as to where boundedness comes into this problem, even if I suppose from the get-go that one of them is bounded.
I have so far that $\left| \frac{x_{n}}{y_{n}}-1\right|=\left|\frac{x_{n}-y_{n}}{y_{n}}\right|$, and I feel like I'm halfway there, because the numerator of that expression is exactly what I want; however, the denominator is a little pesky.
AI: You're almost there. Suppose $|y_n| \le M$ for all $n$. Then
$$0 \le \frac{|x_n - y_n|}{M} \le \frac{|x_n - y_n|}{|y_n|} \to 0,$$
so $|x_n - y_n| \to 0$. (If instead $x_n$ is the bounded sequence, note that $x_n/y_n \to 1$ forces $y_n$ to be eventually bounded as well.)
H: Computing intersection of two subspaces of $C^{\infty}_{2\pi}(\mathbb{R},\mathbb{R})$.
I've been thinking of the following two subspaces of $C^{\infty}_{2\pi}(\mathbb{R},\mathbb{R})$:
$$
A=\{a_1\sin(t)+a_2\sin(2t)+a_3\sin(3t):a_1,a_2,a_3\in\mathbb{R}\}
$$
and
$$
B=\{b_1\sin(t)+b_2\sin^2(t)+b_3\sin^3(t):b_1,b_2,b_3\in\mathbb{R}\}.
$$
I'm wondering if there's an explicit description of $A\cap B$?
Using double and triple angle identities, I deduce
$$
a_1\sin(t)+a_2\sin(2t)+a_3\sin(3t)=(a_1+2a_2\cos(t)+3a_3)\sin(t)-4a_3\sin^3(t).
$$
The Wronskian of $\{\sin(t),\sin^2(t),\sin^3(t)\}$ is $2\sin^3(t)\cos^3(t)$, so they are linearly independent on $\mathbb{R}$. This leads me to suspect that $A\cap B=\langle \sin(t),\sin^3(t)\rangle$, but the pesky $\cos(t)$ in the coefficient of $\sin(t)$ above is bugging me. Is my hunch correct?
AI: Your deduction shows that $\sin(t)$ and $\sin^3(t)=\frac34\sin(t)-\frac14\sin(3t)$ are elements of $A\cap B$, so the dimension of $A\cap B$ is at least $2$. Thus either in fact $$A\cap B=\{a_1\sin(t)+a_3\sin(3t):a_1,a_3\in\mathbb R\}$$ or $A=B$. But $A=B$ cannot hold, as $B$ contains the even function $\sin^2(t)$, whereas all nonzero elements of $A$ are odd functions. (Alternatively for the last step: $\int_0^{2\pi}f(x)\,\mathrm dx=0$ for all $f\in A$, but $\int_0^{2\pi}\sin^2(x)\,\mathrm dx=\pi$)
H: Application of Green's Theorem
I know this is a really basic question, but I seem to be kind of rusty.
$C$ is the boundary of the circle $x^2+y^2=4$
$$\int_C y^3dx-x^3 dy = \int_A -3x^2-3y^2 dA= \int_0^{2 \pi} \int_0^2 -3 r^2 r dr d \theta = -12 \pi$$
Did I make a mistake? My book says it's $-24 \pi$
AI: As noted in the comments, the last calculation is incorrect:
$$\int_0^2 -3r^2\, r\,dr = -3 \int_0^2 r^3\, dr = \left.-\frac{3 r^4}{4}\right|_{0}^{2}$$
This is equal to $-12$. So your integral is $\int_0^{2\pi} -12\, d\theta = -24\pi$.
H: does $\intop_{1}^{\infty}x\sin(x^{3})dx$ really converge?
I'm trying to find a continuous function $f(x)$ on $[0,\infty)$ such that:
$\intop_{1}^{\infty}f(x)dx$ converges while $f(x)$ isn't bounded.
I came up with $f(x)=x\sin(x^{3})$, a function which oscillates like crazy as $x$ tends to infinity, and much faster than $x$ grows, which is the right direction IMO.
Wolfram says it converges, and plugging big numbers shows Cauchy's criterion holds, but I wasn't able to rigorously prove the convergence.
A few questions:
Is there a "nice" way of showing this integral converges?
(general question) is Wolfram's numeric approximation always positive?
is the claim actually true (there exists a function which has an improper integral but isn't bounded)?
Many thanks!
AI: You can integrate by parts with $u=\dfrac{1}{x}$ and $dv = x^2\sin(x^3)dx$ (so that $v=-\frac13\cos(x^3)$). You'll get $\frac13\cos(1)$ minus $\frac13\int_1^\infty\frac{\cos(x^3)}{x^2}\,dx$, an obviously absolutely convergent integral.
H: Does this function have a (global) minimum?
A good day to everyone.
Does the following function have a (global) minimum:
$$1 + \frac{1}{x} + {\left(1 + \frac{1}{x}\right)}^\theta,~~x\in\mathbb R$$
where
$$\theta = {\displaystyle\frac{3\log 2 - \log 5}{2(\log 5 - 2\log 2)}} > 1?$$
WolframAlpha says it has none.
AI: Lets do some analysis and see if we can convince ourselves that WA is correct.
(A plot of the function, omitted here, is instructive; be careful with the imaginary values for $x \lt 0$.)
We find that we have a limit of $2$ as $x \rightarrow \pm \infty$ (of course, we also see this in the plot), but this does not help us. This is our infimum.
Taking the derivative yields:
$$\displaystyle f'(x) = -\frac{x \ln(8/5) (1/x+1)^{(\ln(8/5)/\ln(25/16))}+2 x \ln(5/4)+2 \ln(5/4)}{2 x^2 (x+1) \ln(5/4)}$$
(A plot of $f'(x)$, omitted here, shows no zero crossings.)
The limit of the derivative is zero as $x \rightarrow \pm \infty$, but this does not help us.
Analytically or numerically, we can find no value of $x$ where $f'(x) = 0$, that is, we can find no critical points.
Conclusion: there is no global minimum (nor a global maximum, nor local ones either).
H: Modules decomposition into indecomposables
I think it's not true that every module (over an arbitrary ring) is a direct sum of indecomposable modules, but I can't find a counterexample or literature about this problem. Can anyone help me?
Also I have a similar question: is every abelian group quasi-isomorphic to a direct sum of strongly indecomposable abelian groups? Help with this question (counterexamples, literature) is also needed.
AI: There is such a counterexample in
L.Fuchs, Infinite Abelian Groups, Chap.XIII, Theorem 91.5:
"There is a countable group which has not any non-zero indecomposable direct summand." |
H: Prove that If $f$ is polynomial function of even degree $n$ with always $f\geq0$ then $f+f'+f''+\cdots+f^{(n)}\geq 0$.
I can't solve this problem:
Suppose $f$ is polynomial function of even degree $n$ with always $f\geq0$.
Prove that
$f+f'+f''+\cdots+f^{(n)}\geq 0$.
AI: Let $g=f+f'+\cdots+f^{(n)}$ and let $h(x)=e^{-x}g(x)$. Note that $f^{(n+1)}=0$, so
$$h'(x)=e^{-x}(g'(x)-g(x))=-e^{-x}f(x)\le 0,$$
i.e. $h$ is decreasing on $\mathbb R$.
Since $g$ is a polynomial, $\lim\limits_{x\to+\infty}h(x)=0$. It follows that $h\ge 0$, and hence $g\ge 0$.
H: Proof using Möbius transformation
Let $D$ be the open unit disc, and $f:D\to D$ an analytic function.
How can I prove that $|f'(0)|\le1$?
AI: For every $R\in (0,1)$ we have by Cauchy's Integral formula: $$|f'(0)|=\left|\frac{1}{2\pi i}\int_{|z|=R} \frac{f(z)}{z^2}\, dz\right|\le \frac{1}{2\pi} \int_0^{2\pi} \frac{\max_{|z|=R}|f|}{R^2}\,R\,d\theta\le \frac{1}{R}.$$ But that means that $|f'(0)|\le \frac{1}{R}$ for every $R\in (0,1)$, so certainly $|f'(0)|\le 1$.
H: Every $v \in V - \{ 0 \}$ is cyclic iff the characteristic polynomial of $T : V \to V$ is irreducible over $F$
Let $V\neq \{0\}$ be a vector space over $F$, and $T$ a linear operator on $V$. Prove that every $0\neq v \in V$ is a cyclic vector if and only if the characteristic polynomial of $T$ is irreducible over $F$.
I can't seem to get anywhere on either side... Can someone please help?
AI: One direction is easy: if $v\neq0$ is not a cyclic vector, then the span of the vectors $T^i(v)$ for $i\geq0$ is not the whole space but is $T$-stable; therefore the characteristic polynomial of $T$ restricted to that space is a nontrivial strict divisor of the characteristic polynomial of$~T$, which is therefore reducible.
The other direction is harder; it requires showing that if the characteristic polynomial is reducible then there exists a nonzero non-cyclic vector for $T$. I seem to need the following trick: pick any nonzero vector; if it is non-cyclic, we are done, so assuming it is cyclic the minimal polynomial in $T$ that kills this vector has degree $\dim V$, so it must be equal to the characteristic polynomial. Then the minimal polynomial is equal to the characteristic polynomial, and hence (also) reducible. Now take a proper nontrivial factor $P$ of the minimal polynomial and evaluate it in $T$ giving a linear operator $P(T)$. Using minimality of the minimal polynomial show that $P(T)$ is neither zero nor invertible; its kernel then is a nonzero proper $T$-stable subspace, and the nonzero vectors in it are not cyclic.
Added. Thinking about this, it strikes me that the statement to prove, although pleasingly compact, is not really the best formulation; the situation is more completely described as follows, with the statement as corollary.
Proposition. Let the operator $T$ on the finite dimensional $F$-vector space $V$ have minimal polynomial $\mu_T\in F[X]$. If $\deg\mu_T<\dim V$ then $V$ has no cyclic vectors at all, if $\mu_T$ is reducible then $V$ has nonzero non-cyclic vectors, and when neither of these is the case one has $\deg\mu_T=\dim V$ and all nonzero vectors of$~V$ are cyclic.
Then by the Cayley-Hamilton theorem (which is not used for the proposition itself), if the characteristic polynomial is irreducible then the minimal polynomial must be equal to it and we are in the final case; conversely if all nonzero vectors are cyclic then $\mu_T$ must be irreducible (or $1$, if $\dim V=0$) and equal to the characteristic polynomial.
Proof. For a vector$~v$ denote by $\mu_{T,v}$ the monic polynomial$~P$ in$~F[X]$ of least degree for which $P(T)(v)=0$; then $\deg\mu_{T,v}\leq\dim V$, and $v$ is a cyclic vector if and only if $\deg\mu_{T,v}=\dim V$. Since $\mu_T(T)(v)=0$ one always has $\mu_{T,v}\mid\mu_T$, which implies that if $\deg\mu_T<\dim V$ there cannot be any cyclic vectors. If $\mu_T=PQ$ with $P,Q\in F[X]$ non-constant, then neither $P(T)$ nor $Q(T)$ are zero operators by minimality of $\mu_T$; but their composition is zero, so the subspace $\ker P(T)$ contains the image of $Q(T)$ and is a nonzero proper $T$-stable subspace, whose nonzero vectors are therefore non-cyclic; this establishes the second case. Conversely if $\mu_T$ is irreducible then every nonzero vector$~v$ has $\mu_{T,v}=\mu_T$, so in particular $\deg\mu_T\leq\dim V$, and if $\deg\mu_T=\dim V$ these vectors are all cyclic.
Note that the proposition does not say whether there are any cyclic vectors when $\mu_T$ has degree$~\dim V$ but is reducible; this is in fact true, but requires more detailed considerations to prove.
H: Let $I$ be an ideal generated by a polynomial in $\mathbb Q[x]$. When is $\mathbb Q[x] / I$ a field?
I was looking at my old exam papers and I was stuck on the following problem:
Let $I_1$ be the ideal generated by $x^4+3x^2+2$ and $I_2$ be the ideal generated by $x^3+1$ in $\mathbb Q[x]$. If $F_1=\mathbb Q[x]/I_1$ and $F_2=\mathbb Q[x]/I_2$, then which of the following options is/are correct?
1) $F_1$ and $F_2$ are fields
2) $F_1$ is a field but $F_2$ is not a field
3) $F_2$ is a field but $F_1$ is not a field
4) neither $F_1$ nor $F_2$ is a field
Can someone explains in details how to tackle this.
AI: The key points here are:
Since $\mathbb Q$ is a field, $\mathbb Q[x]$ is a Euclidean domain.
In a principal ideal domain, a nonzero prime ideal is maximal.
$R / I$ is a field iff $I$ is maximal (where $R$ is any commutative ring and $I$ an ideal in $R$).
In a unique factorization domain, an element is prime iff it's irreducible.
Using these points, we find that $\mathbb Q[x]/I_j$ is a field iff the polynomial generating $I_j$ is irreducible.
$x^3 + 1$ is clearly reducible since it has a root ($x = -1$).
By the rational roots theorem, $x^4 + 3x^2 + 2$ doesn't have roots in $\mathbb Q$, but it may be factored into two polynomials of degree $2$. Write $x^4 + 3x^2 + 2 = (x^2 + ax +b)(x^2 + cx + d)$ and solve for $a, b, c, d$ to see if this is the case. Solving this can be simplified by noting that the coefficients of $x^4 + 3x^2 + 2$ are integers. By Gauss's lemma, if it's irreducible over $\mathbb Z$, it's also irreducible over $\mathbb Q$.
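In this particular case, one can shortcut the system-solving by substituting $y=x^2$:
$$x^4+3x^2+2 = y^2+3y+2 = (y+1)(y+2) = (x^2+1)(x^2+2),$$
so $x^4+3x^2+2$ is reducible as well, and neither $F_1$ nor $F_2$ is a field (option 4).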
H: Why does this inequality hold: $4n+2\le4n\log{n}+2n\log{n}$
Why is the following true? (I came across this in an algorithm analysis book but this inequality is not related to algorithm analysis)
$$
4n+2\le4n\log{n}+2n\log{n}
$$
AI: Well based on the comments, it appears there is no mistake and the OP wants to see how the following is true
$$
4n + 2 \le 4n \log n + 2n \log n
$$
First we notice that this is equivalent to
$$
2n + 1 \le 2n \log n + n \log n = 3n \log n = \log \left( n^{3n} \right)
$$
We can exponentiate both sides with base $e$ and the sign stays the same since $e^x$ is always increasing
$$
\implies e^{2n+1} \le n^{3n}
$$
Now we can use induction to prove this $\forall \; n \ge 3$
$$
e^{2(3)+1} = e^7 = 1096.63315843 \le 19683 = 3^9 = 3^{3(3)}
$$
So our base case of $n=3$ is proven. Now let us assume $e^{2n+1} \le n^{3n}$ is true and try to prove that $e^{2(n+1)+1} \le (n+1)^{3(n+1)}$ is true. It's clear that $\forall \; n \ge 3$ that
$$
(n+1)^{3n+3} = (n+1)^{3(n+1)} = \left( n^3 + 3n^2 + 3n + 1\right)^{n+1} \ge n^{3n}e^2 \ge e^{2n+1}e^2 = e^{2n+3}
$$
If you want a more rigorous proof of the inequality $\left( n^3 + 3n^2 + 3n + 1\right)^{n+1} \ge n^{3n}e^2$ let me know. Otherwise $e^{2n+1} \le n^{3n}$ is true which implies that the wanted statement is true for all $n \ge 3$.
H: Integral function
This exercise asks me to calculate the integral function with starting point $x=0$ of the following function:
\begin{equation}
y=
\begin{cases}
2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ x<0\\
x+2\ \ \ \ \ \ \ \ 0\leq x\leq 2\\
4\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ x>2\\
\end{cases}
\end{equation}
Is my solution correct?
I wrote:
\begin{equation}
F(x)=
\begin{cases}
\int_0 ^x 2 dt\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ x<0\\
\int_0 ^x (t+2) dt\ \ \ \ \ \ \ \ 0\leq x\leq 2\\
\int_2 ^x 4 dt\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ x>2\\
\end{cases}
\end{equation}
Thank you very much.
AI: The third part is wrong, and the whole is incomplete: you should evaluate the integrals. Thus, for example, for $x<0$ you should have
$$F(x)=\int_0^x2\,dt=[2t]_0^x=2x\;.$$
Your third part, $\int_2^x4\,dt$, is wrong because you’re supposed to be integrating from $0$, not from $2$. Thus, it should be
$$\int_0^2(t+2)dt+\int_2^x4\,dt\;;$$
I’ll leave it to you to finish evaluating that and the middle part.
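(For reference, carrying the evaluation through gives $F(x)=2x$ for $x<0$, $F(x)=\frac{x^2}{2}+2x$ for $0\le x\le 2$, and $F(x)=\int_0^2(t+2)\,dt+\int_2^x 4\,dt=6+4(x-2)=4x-2$ for $x>2$; note that the pieces agree at $x=0$ and $x=2$.)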
H: Angle between matrices
This is the problem from my homework:
If $A$ is antisymmetric matrix, and $S$ is symmetric matrix where $A,S \in M_n (\mathbb{R})$, determine the angle between them according to the inner product defined as $(A,B)=\text{tr}(AB^T)$.
I have tried calculating the angle between two pairs of concrete matrices from space $M_2(\mathbb{R})$, and the angles don't turn out to be the same. Am I doing something wrong? If it's even possible, how to determine that angle? Thanks.
AI: Hint: As $A$ is antisymmetric, $S$ symmetric, we have
$$ SA^t = -SA, \quad AS^t = AS $$
Hence
$$\def\tr{\mathop{\rm tr}}\tr(SA^t) = -\tr(SA) = -\tr(AS) = -\tr(AS^t) $$
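Since $\operatorname{tr}(M)=\operatorname{tr}(M^t)$ for any square matrix $M$, also $\operatorname{tr}(AS^t)=\operatorname{tr}\big((AS^t)^t\big)=\operatorname{tr}(SA^t)$. Combined with the chain above, $\operatorname{tr}(SA^t)=-\operatorname{tr}(SA^t)=0$, so $(S,A)=0$ and the angle between a symmetric and an antisymmetric matrix is always $\pi/2$.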
H: Does the following number-theoretic equation have any solutions?
Does the following number-theoretic equation have any solutions for prime $p$ and positive $x$?
$$\frac{\sigma(p^x)}{p^x} = \frac{10}{9}$$
It does not have any for $x = 1$. How about for $x > 1$?
(Note that $\sigma = \sigma_1$ is the sum-of-divisors function.)
AI: If $\frac{\sigma(p^x)}{p^x} =\frac{p^{x+1}-1}{p^x(p-1)}= \frac{10}{9},$then
$$9p^{x+1}-9=10p^{x+1}-10p^{x}\implies p^x(10-p)=9\implies p^x\mid 9\implies p=3,\ 3^x\cdot7=9,$$
so there is no such integer $x>1.$
H: Every orthogonal operator is diagonalizable?
The answer is "false", and a rotation is the counterexample, but I can't understand this well.
Let $A=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$; then it is a rotation and also an orthogonal operator. I think it can be diagonalized, with eigenvalues $i,-i$.
AI: They (implicitly) meant "diagonalizable over the reals", and a diagonal matrix whose entries are $i$ and $-i$ is not allowed when we require all of our matrices' entries to be real numbers. From the relevant Wikipedia article:
In general, a rotation matrix is not diagonalizable over the reals, but all rotation matrices are diagonalizable over the complex field.
H: Linear vs nonlinear differential equation
How to distinguish linear differential equations from nonlinear ones?
I know, that e.g.:
$$
y''-2y = \ln(x)
$$
is linear, but
$$
3+ yy'= x - y
$$
is nonlinear.
Why?
AI: Linear differential equations are those which can be reduced to the form $Ly = f$, where $L$ is some linear operator.
Your first case is indeed linear, since it can be written as:
$$\left(\frac{d^2}{dx^2} - 2\right)y = \ln(x)$$
While the second one is not. To see this first we regroup all $y$ to one side:
$$y(y'+1) = x - 3$$
then we simply notice that the operator $y\mapsto g(y) = y(y'+1)$ is not linear (for example we can take two functions $y_1$ and $y_2$ and notice that $g(y_1+y_2)\neq g(y_1) + g(y_2)$).
H: proving congruence of a number modulo 17
We need to prove that $3^{32}-2^{32}\equiv0\pmod{ 17}$. How can we do that?
I tried to express them modulo $17$ in such a way that they cancel out, but that really has not helped much. A little hint will be appreciated.
AI: HINT:
Using Fermat's Little theorem, $$a^{16}\equiv1\pmod{17}\text{ if }(a,17)=1$$
$$\implies a^{16b}\equiv1^b\equiv1\pmod{17},\text{ where } b \text{ is any integer} \ge0$$
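A one-line sanity check in Python (a sketch; three-argument pow performs modular exponentiation):

```python
print(pow(3, 32, 17), pow(2, 32, 17))   # 1 1, so 3^32 - 2^32 ≡ 0 (mod 17)
```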
H: Spectral radius and operator norm
Consider an endomorphism $A$ of a finite-dimensional space. I was wondering whether the relation between the operator norm and the spectral radius $\rho$, given by:
$\|A\| \ge \rho(A)$ is true for all operator norms or only the 2-norm?
AI: It is true, not only for all operator norms, but also for all submultiplicative matrix norms: for any eigenpair $(\lambda,x)$ of $A$, repeat $x$ to form the columns of a square matrix $X$. Then $|\lambda|\|X\|=\|\lambda X\|=\|AX\|\le\|A\|\|X\|$ and hence $|\lambda|\le\|A\|$.
H: Geometrically distributed RV's - what's wrong with my reasoning?
Let $X$ and $Y$ be iid random variables distributed geometrically with probability of success $p$ and support $\mathbb{N}=\{0,1,2,\cdots\}.$ So in particular, letting $q=1-p$, we have that $$\mathbf{P}(X=k)=\mathbf{P}(Y=k)=pq^k\quad\forall k \in \mathbb{N}.$$
I'm trying to find $$\mathbf{P}(X=x\mid X+Y=k)$$ for $x,k \in \mathbb{N}.$ As an intermediate goal, I'm trying to find $$\mathbf{P}(X=x,\, X+Y=k)$$
But the result I'm getting is independent of $x$ for some reason. Here's the logic:
$\mathbf{P}(X=x,\, X+Y=k)=$
$\mathbf{P}(X=x,\, Y=k-x)=$
$\mathbf{P}(X=x)\mathbf{P}(Y=k-x)=$
$pq^x pq^{k-x}=$
$p^2 q^k$
Surely the expression of interest isn't independent of $x$, so why is it vanishing?
AI: The result is perhaps at first a little surprising. But your calculation is perfectly correct. Continue on the path you are on, dividing by the easily computed $\Pr(X+Y=k)$ to find the conditional probability. You will end up with a familiar distribution.
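(Spoiler, in case you get stuck: $\Pr(X+Y=k)=\sum_{x=0}^{k}p^2q^k=(k+1)p^2q^k$, so $\Pr(X=x\mid X+Y=k)=\frac{p^2q^k}{(k+1)p^2q^k}=\frac{1}{k+1}$ for $x\in\{0,1,\dots,k\}$: given the sum, $X$ is uniform.)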
H: The principal ideal $(x(x^2+1))$ equals its radical.
Let $\mathbb R$ be the reals and $\mathbb R[x]$ be the polynomial ring of one variable with real coefficients. Let $I$ be the principal ideal $(x(x^2+1))$. I want to prove that the ideal of the ideal's variety is not the same as its radical, that is, $I(V(I))\not=\text {rad}(I)$. I've reduced this to proving that $I=\text{rad}(I)$. How can I go about that?
AI: Let $f\in{\rm rad}( I)$, that is, $f^n\in I$ for some $n$. You can also think over the complex field: $f=c(x-z_1)(x-z_2)\dots (x-z_m)$ for some $z_j\in\Bbb C$. Then $f^n\in I$ means that among the complex roots of $f^n$ we also find the roots of $x(x^2+1)$, i.e., $-i,0,i$. But $f$ has the same roots as $f^n$, and since these three roots are distinct, $x(x^2+1)$ divides $f$; that means we already had $f\in I$.
H: Understanding examples of subfield and prime subfield of a finite field
I have already taken a look at this answer. Somehow it did not answer my question.
As I can find in various places in the literature:
A lecture note, Definition 4.1: Let $F$ be a field. A subset $K$ that is itself a field under the operations of $F$ is called a subfield of $F$.
Another lecture note, Section 7.4.2: A subfield $G$ of a field $F$ is a subset of the field that is itself a field under the operations of $F$.
Now, if we consider the operations of the field to be $+ \bmod n$ and $\times \bmod n$, we find that $\mathbb{Z}_2$ and $\mathbb{Z}_5$ are both fields under these operations.
But in order to get a good feeling for subfields, we try to consider $\mathbb{Z}_{3^2} = \mathbb{Z}_9$. We find that this is not a field under the aforestated operations.
Not all the non-zero elements have multiplicative inverses; notably $3$ and $6$ ($\gcd(3,9) \not=1$ and $\gcd(6,9) \not=1$) do not.
Indeed, as Wikipedia states,
Even though all fields of size $p$ are isomorphic to $\mathbb{Z}/p\mathbb{Z}$, for $n \ge 2$ the
ring $\mathbb{Z}/p^n\mathbb{Z}$ (the ring of integers modulo $p^n$) is not a field. The
element $p$ $(\bmod\ p^n)$ is nonzero and has no multiplicative inverse.
Looking for examples, we find one here for $GF(2^3)$. This is based on polynomials.
Now, coming to my original point on (understanding) subfield or prime subfield
of finite fields, please tell me,
Whether it is totally impossible to construct purely numerical examples of
fields of size $p^n$.
Given a (non-numerical) field of size $p^n$, (one can be found in page 90 (16) of this document), what is the best way to identify the subfield(s) and prime subfield? I appreciate an answer which nurtures my intuition, not a theoretical one which puts me deep in difficult mathematical terms.
AI: Let me address your first question. First, I want to argue that there is no precise meaning of "involving numbers only". For example, given a finite field $F$ of size $4$ constructed in the usual manner (quotient of a polynomial ring over $\mathbb{Z}/2\mathbb{Z}$), I can choose a set of numbers, say
$$S=\{37,\tfrac{5}{19},\pi,e\}$$
and, choosing a bijection of $S$ with $F$, use transport of structure to give $S$ the structure of a field. The field structure does not depend in any way on what the underlying set is "made of".
However, along the lines of what I think you are ultimately after, you can obtain finite fields of any possible order using larger rings of integers. For example, $\mathbb{Z}[i]/(3)$ is a finite field of size $9$, and $\mathbb{Z}[i]$ consists of very reasonable numbers,
$$\mathbb{Z}[i]=\{a+bi\mid a,b\in\mathbb{Z}\}.$$
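A small computational sketch of this example (assuming Python is available): represent $a+bi\in\mathbb{Z}[i]/(3)$ as a pair $(a,b)$ with entries mod $3$, and verify by brute force that every nonzero element has a multiplicative inverse, i.e., that the quotient really is a field of size $9$.

```python
P = 3  # the modulus; the quotient Z[i]/(3) turns out to be a field

def mul(u, v):
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i, coefficients mod P
    (a, b), (c, d) = u, v
    return ((a * c - b * d) % P, (a * d + b * c) % P)

elements = [(a, b) for a in range(P) for b in range(P)]   # 9 elements
for u in elements:
    if u != (0, 0):
        assert any(mul(u, v) == (1, 0) for v in elements), u
print("every nonzero element of Z[i]/(3) has a multiplicative inverse")
```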
Now let me address your second question. Let's use $\mathbb{F}_p$ to mean $\mathbb{Z}/p\mathbb{Z}$, a finite field of order $p$; it is a very common notation that is slightly less cumbersome, but doesn't mean anything different: they are exact synonyms.
A finite field of order $p^n$ is often constructed by taking the polynomial ring $\mathbb{F}_p[x]$, choosing an irreducible polynomial $f\in \mathbb{F}_p[x]$ of degree $n$, and then making the field
$$F=\mathbb{F}_p[x]/(f).$$
Now, the division algorithm for polynomials tells you that each equivalence class in this quotient can be uniquely identified by a representative of degree $<n$. In other words,
$$\begin{align*}
F&=\{a_0+a_1x+\cdots +a_{n-1}x^{n-1}+(f)\mid a_0,a_1,\ldots,a_{n-1}\in\mathbb{F}_p\}\\
&=\left\{\,\overline{a_0+a_1x+\cdots +a_{n-1}x^{n-1}}\,\;\middle\vert\;a_0,a_1,\ldots,a_{n-1}\in\mathbb{F}_p\right\}\\
&=\{a_0+a_1\overline{x}+\cdots +a_{n-1}\overline{x}^{n-1}\mid a_0,a_1,\ldots,a_{n-1}\in\mathbb{F}_p\}
\end{align*}$$
Letting the symbol $\alpha$ be a stand-in for $\overline{x}$, you can think of $F$ as being $\mathbb{F}_p$ with a new element "$\alpha$" added in, where $\alpha$ is a root of $f$, and you can write $F=\mathbb{F}_p[\alpha]$.
Now, the prime subfield of $F$ is just the "constant" polynomials, i.e. the ones with no $\alpha$'s in them:
$$\text{the prime subfield of }F=\{a_0+0\alpha+\cdots+0\alpha^{n-1}\mid a_0\in\mathbb{F}_p\}$$
and for each divisor $d\mid n$, $F$ has a unique subfield of order $p^d$: it consists of the fixed points of the $d$-th power of the Frobenius map $u\mapsto u^p$,
$$\text{the subfield of }F\text{ of order }p^d=\{u\in F\mid u^{p^d}=u\}$$
(this set has exactly $p^d$ elements, because the polynomial $x^{p^d}-x$ has distinct roots and splits in $F$ when $d\mid n$; to see that it is a subfield, remember that $(a+b)^p=a^p+b^p$ in a field of characteristic $p$, so Frobenius is a field homomorphism).
H: Uniform convergence of a power series
I'm new to this subject and would very much appreciate your help with this question. I'm not really sure about how to approach this.
$$f(x) = \sum_1^\infty\frac{1}{n}x^n$$
If I'm not mistaken, the interval of convergence of this series is $[-1,1)$.
I need to check if it converges uniformly on $ [0,1)$.
I need to check if it converges uniformly on $(-1,0]$.
To calculate the function to which the series converges within the radius of convergence, note that the answer should hold on the whole interval of convergence, even though uniform convergence is only guaranteed on $[-a,a]$ for $0\leq a<R$.
AI: The radius of convergence $R=1$ and the interval of convergence is $[-1,1)$
Since $\lim_{x\to 1^-} \frac{x^n}{n}=\frac{1}{n}$ and the series
$$\sum_{n=1}^\infty \frac{1}{n}$$
is divergent, the given series isn't uniformly convergent on $[0,1)$ (if it were, the uniform Cauchy criterion would survive the limit $x\to1^-$ and force $\sum 1/n$ to converge)
Let $s=-x$ then we have
$$\sum_{n=1}^\infty \frac{x^n}{n}=\sum_{n=1}^\infty \frac{(-1)^ns^n}{n}\quad s\in[0,1)$$
so this series is alternating, and the alternating series estimate gives
$$\left|\sum_{k=n+1}^\infty \frac{(-1)^ks^k}{k}\right|\leq\frac{1}{n}\to0\quad \forall s$$
hence we have the uniform convergence on $(-1,0]$ for the given series.
We can prove easily that the series $\sum_n t^n$ is uniformly convergent on every $[-x,x]$ with $0<x<1$ so by integration term by term we have
$$-\log(1-x)=\int_0^x\frac{dt}{1-t}=\int_0^x \sum_{n=0}^\infty t^n dt=\sum_{n=0}^\infty\frac{x^{n+1}}{n+1}=\sum_{n=1}^\infty\frac{x^{n}}{n}=f(x)$$
H: How to solve a equation with floor in it?
I tried to do everything I could, but I don't know what to do with that floor.
$58 = y\cdot\left[\frac{80}{y}\right]$
Where $[x]$ is floor function.
AI: HINT:
So, $\frac{58}y= \left[\frac{80}{y}\right] $ which is an integer
$\implies y$ must divide $58$
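Following the hint (and assuming $y$ is a positive integer, as the hint suggests), a brute-force sketch in Python over the divisors of $58$:

```python
for y in [1, 2, 29, 58]:          # the positive divisors of 58
    if y * (80 // y) == 58:       # 80 // y is the floor of 80/y
        print(y)                  # prints 29 and 58
```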
H: Ring without zero divisors that has positive characteristic must have prime characteristic
Let $R$ be an integral domain and suppose $R$ has characteristic $n > 0$.
Prove that $n$ must be prime.
I just proved this exercise, but I think the hypotheses can be weakened: we can prove the statement whenever $R$ is a ring without zero divisors. $R$ need not be commutative or have an identity.
Here is the proof.
Suppose the characteristic $n$ is not prime, so $n = mk$ with $1 < m, k < n$. By minimality of $n$, neither $m$ nor $k$ is the characteristic, so there exist $a,b\in R$ with $ma\ne0$ and $kb\ne 0$. So we have $0 = n(ab) = mk(ab)=(ma)(kb)$, which contradicts the fact that $R$ has no zero divisors.
What is wrong with this proof?
Definition 2.16 If $R$ is an arbitrary ring and there exists a positive
integer $n$ such that $nr = 0$ for every $r \in R$ (i.e. $r$ added to itself $n$
times is the zero element) then the least such positive integer $n$ is
called the characteristic of $R$, and $R$ is said to have positive
characteristic. If no such positive integer $n$ exists, $R$ is said to
have characteristic $0$.
AI: Your proof does not work for the zero ring (fortunately, that is not an integral domain). It has non-prime characteristic $1$.
Apart from that, your proof is ok.
H: Study regimen for discrete mathematics? - Lack high-school maths...
I have just gotten into college, and will be studying mathematics from next semester. (this course)
Unfortunately I did not study mathematics for the last 2-3 years of high school.
What should I study for the next 50 days in preparation for the course?
AI: The "Schaum's Outline Series" volume on "Discrete Mathematics" is pretty good and seems to match the course description you linked to. It was written with this sort of purpose in mind. It has lots of problems, with solutions. It's also cheap. |
H: Switch place of 2 infinite summations
In our course about queuing theory sometimes this rule is used:
$$\sum_{n=0}^\infty \sum_{k=n+1}^\infty n\cdot k = \sum_{k=1}^\infty \sum_{n=0}^{k-1} n\cdot k$$
Another example (but without infinity):
$$\sum_{n=0}^m n \sum_{i=0}^{n-1} i = \sum_{i=0}^{m-1} i \sum_{n=i+1}^{m} n$$ for a given m.
These are just 2 examples but similar operations like these are used a few times and I can't seem to figure out what the general rule is. I also couldn't find it on the wikipedia page about summation (http://en.wikipedia.org/wiki/Summation).
Could someone explain/prove the rule or point me to a link where I can find information about it?
AI: It's more of a process than a rule, but here's how I would do it. Take your first sum $\sum_{n=0}^\infty \sum_{k=n+1}^\infty n\cdot k$, and look through first what values of $k$ does appear at all. Since the lowest value of $k$ anywhere in the sum is $1$, and $k$ is unbounded above, the outer summation has to be $\sum_{k=1}^\infty$.
Now we look at a given value of $k$, and see what values of $n$ permits that $k$ to appear in the original sum. We see that this happens when $k\geq n+1$ which means that $n\leq k-1$, so $k-1$ has to be the upper limit of the inner sum. The value $n=0$ is permitted by any value of $k$, and it is the lowest value of $n$ appearing, and therefore has to be the lower limit of the sum. The inner sum is therefore $\sum_{n=0}^{k-1}$.
I figured an illustration by picture might come in handy as well. Assume we have some function $f(n, k)$ (in your case $f(n, k) = nk$), and we want to find the sum
$$
\sum_{n=0}^\infty \sum_{k=n+1}^\infty f(n, k)
$$
Then the picture (omitted here) showed (a small part of) the first quadrant of the $nk$-plane (including the axes), with brown dots marking the points whose function values you are interested in.
Now, this picture was taken out of its original context, but works fine, assuming we have $n$ along the $x$-axis and $k$ along the $y$-axis. Then, if we want $k$ to be the first index of summation, we see that it goes from $1$ to $\infty$, and that for each $k$, $n$ goes from $0$ to $k-1$. This way of drawing a picture of the terms you're interested in always works, also with finite sums. It's also a big help if you're ever going to do the same for integration.
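As a final sanity check, a small Python sketch confirms the second (finite) identity from the question for several values of $m$:

```python
def lhs(m):
    # sum_{n=0}^{m} n * sum_{i=0}^{n-1} i
    return sum(n * sum(range(n)) for n in range(m + 1))

def rhs(m):
    # sum_{i=0}^{m-1} i * sum_{n=i+1}^{m} n
    return sum(i * sum(range(i + 1, m + 1)) for i in range(m))

print(all(lhs(m) == rhs(m) for m in range(20)))   # True
```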
H: Does this inequality have any solutions in $\mathbb{N}$?
Does this (number-theoretic) inequality have any solutions $x \in \mathbb{N}$?
$$\frac{\sigma_1(x)}{x} < \frac{2x}{x + 1}$$
Notice that we necessarily have $x > 1$.
AI: Yes; in fact, almost any deficient number will do. Let us show that this is the case for the odd prime numbers $x\in\mathbb{P}$. We know that $\sigma_1(x)=x+1$. On the other hand, the inequality can be rewritten as
$$\sigma_1(x)<\frac{2x^2}{x+1}=2x-2+\frac{2}{x+1}.$$
Since $x+1\le2x-2$ and $\frac{2}{x+1}>0$ for $x\ge3$, the assertion follows.
By the way, I admire your work on odd perfect numbers; it is very educational!
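For readers who want to see how common such $x$ are, here is a quick search sketch (assuming sympy is available); interestingly, powers of $2$, although deficient, never satisfy the inequality.

```python
from sympy import divisor_sigma

# sigma(x)/x < 2x/(x+1), cleared of denominators
hits = [x for x in range(2, 51) if divisor_sigma(x) * (x + 1) < 2 * x * x]
print(hits)   # all odd primes up to 50 appear, along with e.g. 9, 10, 14, 15, 21, 25
```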
H: Concurrent chances calculation
I have a question I don't know how to solve correctly, hope you can help me.
Let's say I have something bad, a cancer or something, and each year I have a 1 in 750 chance of dying. The more years I live with the cancer, the more years I am exposed to that risk.
I want to calculate what chances of dying I have if I live, let's say, 10 years. I know that the more years I live with cancer, the more chances I have to die, but how to calculate them exactly?
I know that it isn't just summing up the chances per year, because if I have a 50% chance of dying each year, that doesn't mean that in 2 years I'm going to die for sure.
PS: I don't actually have something bad; this is for calculating the chances of dying because of smoking, travelling by car, plane, etc...
AI: Assuming independence of the "death lottery" over the years, the probability of surviving $n$ consecutive years is
$(1-p)^n$, hence the risk of dying is $1-(1-p)^n$. For example with two years and a 50% death probability, that is $p=0.5$ and $n=2$, you get $0.75$ as the death probability instead of $1$.
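For the asker's concrete numbers, a quick sketch in Python (assuming a constant $p = 1/750$ per year and independence across years):

```python
p, n = 1 / 750, 10
risk = 1 - (1 - p) ** n
print(f"{risk:.4%}")   # 1.3253%, slightly below the naive sum 10/750 = 1.3333%
```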
H: Complement of a Topology
Let $(X, \tau)$ be a topological space. I was wondering: given $$\tau'=\{A^C \mid A \in \tau\},$$ is $\tau'$ also a topology on $X$? If so, why? Thank you.
AI: Not in general as closed sets aren't usually stable under infinite unions. If $(X,\tau)$ has this property, we say that $(X,\tau)$ is an Alexandrov space, and then this is equivalent to open sets being stable under infinite intersections.
As an elementary example of how a space can fail to be Alexandrov, take $\mathbb{R}$ with the usual topology and consider the set of open sets $$A=\left\{\left(-\infty,\frac{1}{n}\right)\cup\left(1-\frac{1}{n},\infty\right)\:|\:n\in\mathbb{N}\right\}.$$ The set of complements of elements of $A$ is $$B=\left\{\left[\frac{1}{n},1-\frac{1}{n}\right]\:|\: n\in\mathbb{N}\right\}$$ and $\bigcup_n B=(0,1)$ which is not a complement of an element of the usual topology on $\mathbb{R}$ as it is a non-trivial open set and so is not closed.
H: Can the inverse of an element of a countable set approach infinity?
Say we have a countable set $A$.
$f:\mathbb{N}\to A$.
Can we say that there exists at least one element $a\in A$ such that $f^{-1}(a)$ contains numbers greater than any number $n\in \mathbb{N}$ we choose? Remember that $a$ is an element we explicitly know.
AI: Approaching infinity does not mean being transfinite. Furthermore, the natural numbers are ordered without a maximum, so being "larger than any other number" means that you're not a natural number.
If we do not require $f$ to be injective, however, it is possible that $f^{-1}(a)$ is an unbounded set of integers. Namely, the following statement is true: $$\text{For all } n \text{ there exists } m \text{ such that: } n<m \text{ and } f(m)=a.$$
In this sense, $f^{-1}(a)$ does "approach" infinity. It is possible that $f\colon\Bbb N\to A$ is such that every $a\in A$ has an infinite preimage. At least, of course, when $A$ is countable.
H: Evaluate $\int_0^\infty \frac{\log(1+x^3)}{(1+x^2)^2}dx$ and $\int_0^\infty \frac{\log(1+x^4)}{(1+x^2)^2}dx$
Background: Evaluation of $\int_0^\infty \frac{\log(1+x^2)}{(1+x^2)^2}dx$
We can prove using the Beta-Function identity that
$$\int_0^\infty \frac{1}{(1+x^2)^\lambda}dx=\sqrt{\pi}\frac{\Gamma \left(\lambda-\frac{1}{2} \right)}{\Gamma(\lambda)} \quad \lambda>\frac{1}{2}$$
Differentiating the above equation with respect to $\lambda$, we obtain an expression involving the Digamma Function $\psi_0(z)$.
$$\int_0^\infty \frac{\log(1+x^2)}{(1+x^2)^\lambda}dx = \sqrt{\pi}\frac{\Gamma \left(\lambda-\frac{1}{2} \right)}{\Gamma(\lambda)} \left(\psi_0(\lambda)-\psi_0 \left( \lambda-\frac{1}{2}\right) \right)$$
Putting $\lambda=2$, we get
$$\int_0^\infty \frac{\log(1+x^2)}{(1+x^2)^2}dx = -\frac{\pi}{4}+\frac{\pi}{2}\log(2)$$
Question:
But, does anybody know how to evaluate $\displaystyle \int_0^\infty \frac{\log(1+x^3)}{(1+x^2)^2}dx$ and $\displaystyle \int_0^\infty \frac{\log(1+x^4)}{(1+x^2)^2}dx$?
Mathematica gives the values
$\displaystyle \int_0^\infty \frac{\log(1+x^3)}{(1+x^2)^2}dx = -\frac{G}{6}+\pi \left(-\frac{3}{8}+\frac{1}{8}\log(2)+\frac{1}{3}\log \left(2+\sqrt{3} \right) \right)$
$\displaystyle \int_0^\infty \frac{\log(1+x^4)}{(1+x^2)^2}dx = -\frac{\pi}{2}+\frac{\pi \log \left( 6+4\sqrt{2}\right)}{4}$
Here, $G$ denotes the Catalan's Constant.
Initially, my approach was to find closed forms for
$$\int_0^\infty \frac{1}{(1+x^2)^2(1+x^3)^\lambda}dx \ \ , \int_0^\infty \frac{1}{(1+x^2)^2(1+x^4)^\lambda}dx$$
and then differentiate them with respect to $\lambda$ but it didn't prove to be of any help.
Please help me prove these two results.
AI: The 2nd evaluation can be obtained from the residue theorem:
Using parity, write the integral as $\displaystyle \frac12\int_{-\infty}^{\infty}\frac{\ln(1+x^4)\,dx}{(1+x^2)^2}$.
Interpret this as a complex integral and pull the integration contour to, say, $i\infty$. The result will be given by the residue at 2nd order pole at $x=i$ and two integrals of the logarithm jump (equal to $2\pi i$) over the branch cuts emanating from $x=e^{i\pi/4}$ and $x=e^{3i\pi /4}$ in the radial directions.
For the first contribution, we have
$$2\pi i \cdot\mathrm{res}_{x=i}\frac{\ln(1+x^4)}{(1+x^2)^2}=\frac{\pi}{2}\left(\ln2 -2\right).$$
The integral over the branch cut $(e^{i\pi/4},e^{i\pi/4}\infty)$ is
$$2\pi i \int_{e^{i\pi/4}}^{e^{i\pi/4}\infty}\frac{dx}{(1+x^2)^2}=\frac{\pi i}{2}\left(\pi-\sqrt{2}-2\arctan e^{i\pi/4}\right),$$
and, similarly, for the second branch cut $(e^{3i\pi/4},e^{3i\pi/4}\infty)$ we find
$$2\pi i \int_{e^{3i\pi/4}}^{e^{3i\pi/4}\infty}\frac{dx}{(1+x^2)^2}=\frac{\pi i}{2}\left(\sqrt{2}-\pi-2\arctan e^{3i\pi/4}\right).$$
Combining everything, one obtains the answer:
\begin{align}
\frac12\left\{\frac{\pi}{2}\left(\ln2 -2\right)+\pi\, \mathrm{arccoth}\sqrt{2}\right\}=
-\frac{\pi}{2}+\frac{\pi}{4}\ln(6+4\sqrt{2}).
\end{align}
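As a numeric sanity check of the closed form (a sketch, assuming mpmath is available):

```python
from mpmath import mp, quad, log, pi, sqrt, inf

mp.dps = 30
numeric = quad(lambda x: log(1 + x**4) / (1 + x**2)**2, [0, inf])
closed = -pi / 2 + pi * log(6 + 4 * sqrt(2)) / 4
print(numeric - closed)   # ~ 0 to the working precision
```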
H: Eigenvectors of inverse complex matrix
For a non-singular matrix, it's pretty straightforward to prove that $\lambda$ is an eigenvalue of $A$ if and only if $\frac{1}{\lambda}$ is an eigenvalue of $A^{-1}$. Let $A$ be a non-singular matrix, $x$ an eigenvector of $A$ and $\lambda \neq 0$ its eigenvalue:
$$Ax = \lambda x \iff A^{-1}Ax = \lambda A^{-1}x \iff \frac{1}{\lambda}x = A^{-1}x$$
from this approach it seems to me that inverse matrix has reciprocal eigenvalues, but same eigenvectors.
However, consider the following matrix $$ A = \begin{pmatrix}1 & 0 & 0\\
-2 & -1 & -1\\
4 & 2 & 1
\end{pmatrix} $$
whose eigenvalues are $\left\{ 1,i,-i \right\} $. So eigenvalues of $A^{-1}$ are $\left\{1,\frac{1}{i},\frac{1}{-i} \right\} = \left\{ 1,-i,i \right\} $.
I tried putting this matrix into MATLAB and computing the eigenvectors of its inverse, but the elements of the eigenvectors of $A^{-1}$ seem to be complex conjugates of the elements of the eigenvectors of $A$. Where is the hidden flaw in the proof?
AI: There is no hidden flaw in your proof; I'm sure you misinterpreted MATLAB's results. MATLAB sorts the eigenvalues (and the corresponding eigenvectors) of $A$ and of $A^{-1}$ in the same particular order, say $\{1, i, -i\}$; so the eigenvector listed next to the eigenvalue $i$ of $A^{-1}$ is really the one belonging to the eigenvalue $-i$ of $A$. Note moreover that, as $A$ is real, if $v$ is an eigenvector corresponding to $\lambda$, its complex conjugate $\bar v$ is a $\bar \lambda$-eigenvector, as
$$ A\bar v = \overline{Av} = \overline{\lambda v} = \bar\lambda \bar v. $$
So the complex conjugate eigenvectors correspond to the complex conjugate eigenvalues of $A^{-1}$. |
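A quick NumPy sketch standing in for the MATLAB experiment; it checks that each eigenvector of $A$ really is an eigenvector of $A^{-1}$, for the reciprocal eigenvalue:

    import numpy as np

    A = np.array([[1, 0, 0],
                  [-2, -1, -1],
                  [4, 2, 1]], dtype=float)
    Ainv = np.linalg.inv(A)

    w, V = np.linalg.eig(A)
    for lam, v in zip(w, V.T):                  # columns of V are eigenvectors
        print(np.allclose(Ainv @ v, v / lam))   # True, three times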
H: Show that $\int^{\infty}_{0}\left(\frac{\sin(x)}{x}\right)^2 < 2$
I'm trying to show that this integral converges and is $<2$:
$$\int^{\infty}_{0}\left(\frac{\sin(x)}{x}\right)^2dx < 2$$
What I did is to show this expression:
$$\int^{1}_{0}\left(\frac{\sin(x)}{x}\right)^2dx + \int^{\infty}_{1}\left(\frac{\sin(x)}{x}\right)^2 dx$$
Second expression :
$$\int^{\infty}_{1}\left(\frac{\sin(x)}{x}\right)^2 dx < \int^{\infty}_{1}\frac{1}{x^2}\,dx = \lim_{b\to \infty} \left[-\frac{1}{x}\right]^b_1 = 1 $$
Now for the first expression I need to find an explanation of why it's $<1$, and then I will prove it.
I would like to get some advice for the first expression. thanks!
AI: Hint: $$\lim_{x\to0}\frac{\sin x}{x}=1.$$ |
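(As a side note, the exact value of the full integral is $\pi/2\approx 1.5708<2$. A quick numerical sketch, assuming SciPy; np.sinc(x/np.pi) equals $\sin(x)/x$ and handles $x=0$ gracefully:)

    import numpy as np
    from scipy.integrate import quad

    val, _ = quad(lambda x: np.sinc(x / np.pi)**2, 0, np.inf, limit=500)
    print(val, np.pi / 2)   # about 1.5708 in both cases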
H: Laplace, Legendre, Fourier, Hankel, Mellin, Hilbert, Borel, Z...: unified treatment of transforms?
I understand "transform methods" as recipes, but beyond this they are a big mystery to me.
There are two aspects of them I find bewildering.
One is the sheer number of them. Is there a unified framework that includes all these transforms as special cases?
The second one is heuristic: what would lead anyone to discover such a transform in the course of solving a problem?
(My hope is to find a unified treatment of the subject that simultaneously addresses both of these questions.)
AI: The essential idea of many transforms is to change the basis in the space of functions with the hope that in the new basis the problem will simplify.
Let me give a finite-dimensional example. Suppose we have a $2\times2$ matrix $A$ and we want to compute $A^{1000}$. Direct approach would not be very wise. However, if we first diagonalize $A$ as $PA_dP^{-1}$ (i.e. rotate the basis by $P$), the calculation becomes much easier: the answer is given by $PA_d^{1000}P^{-1}$ and computing powers of diagonal matrix is a very simple task.
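A minimal NumPy sketch of this finite-dimensional picture (the matrix here is an arbitrary diagonalizable example chosen for illustration):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    w, P = np.linalg.eig(A)                        # A = P @ diag(w) @ P^{-1}
    A20 = P @ np.diag(w**20) @ np.linalg.inv(P)    # powers are trivial in the eigenbasis
    print(np.allclose(A20, np.linalg.matrix_power(A, 20)))   # True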
A somewhat analogous infinite-dimensional example would be the solution of the heat equation $u_t=u_{xx}$ using Fourier transform $u(x,t)\rightarrow \hat{u}(\omega,t)$. The point is that in the Fourier basis the operator $\partial_{xx}$ becomes diagonal: it simply multiplies $\hat{u}(\omega,t)$ by $-\omega^2$. Therefore, in the new basis, our partial differential equation simplifies and becomes ordinary differential equation.
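And a sketch of the heat-equation analogue on a periodic grid, where the FFT turns $\partial_{xx}$ into multiplication by $-\omega^2$ (the initial profile is an arbitrary choice):

    import numpy as np

    n = 256
    x = np.linspace(0, 2*np.pi, n, endpoint=False)
    u0 = np.exp(-10 * (x - np.pi)**2)                 # initial temperature profile

    omega = 2*np.pi * np.fft.fftfreq(n, d=2*np.pi/n)  # angular frequencies
    t = 0.1
    u = np.fft.ifft(np.exp(-omega**2 * t) * np.fft.fft(u0)).real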
In general, the existence of a transform adapted to a particular problem is related to its symmetry. The new basis functions are chosen to be eigenfunctions of the symmetry generators. For instance, in the above PDE example we had translation symmetry with the generator $T=-i\partial_x$. In the same way, e.g. Mellin transform is related to scaling symmetry, etc. |
H: Function with continuous derivative is continuous?
Is it true that if $\frac{d}{dx}f(x)$ is continuous, then $f(x)$ is continuous too?
If not, can you give a counterexample?
AI: Just the fact that your function $f(x)$ is differentiable is enough to prove that it is continuous. The derivative $\frac{d}{dx}f(x)$ need not even be continuous.
Please have a look here http://www-math.mit.edu/~djk/18_01/chapter02/proof04.html |
H: calculate $ F(x)= \int_{0}^{\sin x}\sqrt{1-t^2}\,dt $
Calculate $F'(x)$
I have this exercise on my worksheet; I am having trouble obtaining the answer listed on the answer sheet, which is $-\cos^2(x)$.
AI: You have $F(x)=G(\sin x)$ where $G(y)=\int_0^y\sqrt{1-t^2}dt$ satisfies, by the fundamental theorem of calculus, $G'(y)=\sqrt{1-y^2}$. So by the chain rule
$$F'(x)=G'(\sin x)\cos x=\sqrt{1-\sin^2x}\cdot \cos x=|\cos x|\cos x.$$
That's $\cos^2x$ on $[-\pi/2+2k\pi,\pi/2+2k\pi]$, and $-\cos^2 x$ on $[\pi/2+2k\pi,3\pi/2+2k\pi]$. |
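A finite-difference sanity check of that sign pattern (a sketch using SciPy's quad; note $\cos 1>0$ while $\cos 2<0$):

    import numpy as np
    from scipy.integrate import quad

    F = lambda x: quad(lambda t: np.sqrt(1 - t**2), 0, np.sin(x))[0]

    h = 1e-6
    for x in (1.0, 2.0):
        fd = (F(x + h) - F(x - h)) / (2 * h)          # numerical F'(x)
        print(fd, np.abs(np.cos(x)) * np.cos(x))      # the two values agree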
H: An equation about a rectangle with given perimeter
I am doing a revision calculator paper and am stuck on an algebra question.
There is a picture of a rectangle. One side is $x-2,$ another side is $2x +1.$
The question is: set up and solve an equation to work out the value of $x.$
The perimeter of this rectangle is $43$cm.
How do I do this? Sorry, I am useless with algebra, and it's worth 5 marks.
Thanks
AI: For the perimeter of a rectangle, we know that the perimeter is the sum of the lengths of all its sides. In a rectangle, opposite sides are equal in length.
$$\text{Perimeter}\; = 2\times\;\text{length}\; + 2\times\;\text{width}$$
$$2(x - 2)+ 2(2x +1) = \color{blue}{2x} \color{red}{\bf -4} + \color{blue}{4x} + \color{red}{\bf 2} =43$$
$$\color{blue}{6x} \color{red}{\bf - 2} = 43$$
$$6x -2 + {\bf 2} = 43 + {\bf 2}$$
$$6x = 45$$
$$\dfrac{1}{6} \times 6x = \dfrac{1}{6} \times 45$$
$$ x = \dfrac {45}{6} = \dfrac{{\bf 3}\times 15}{{\bf 3} \times 2} =\dfrac {15}{2}$$
$$\text{This gives us}\;\;x = \frac{15}{2} = 7\frac12 = 7.5\;\text{cm}$$ |
H: Bayes' Theorem problem - Enough information?
There are 5000 British people. A British person is either English, Scottish, or Welsh. 30% of the British are Scottish, and there are six times as many English as Welsh. The probability that a British person has red hair is 0.25, while the probability that a non-Welsh British person has red hair is 0.2.
What is the expected number of Welsh with red hair?
Is it possible to solve this, given the above information?
R = the event that a british has red hair.
W = the event that a british is welsh.
P(R) = 0.25
P(W) = 0.1
P(R|W') = 0.2
I feel like I'm missing information about P(R|W), or am I completely off track?
Thanks!
AI: From the given information, $70\%$ are non-Scottish, so the English are $60\%$ and the Welsh are $10\%$.
Let $R$ be the event a person has red hair, and let $W$ be the event a person is Welsh. We want $\Pr(R\mid W)$. We are told that $\Pr(R)=0.25$, and $\Pr(R\mid W')=0.2$.
Solve the problem informally. Out of say $100$ British, we would have $25$ with red hair. Of the $100$ people, $90$ would be non-Welsh, accounting for $18$ red-haired people. So $7$ of the $10$ Welsh would have red hair. Scale up by multiplying by $50$.
Another way: One can also use the machinery of conditional probability. Start from
$$\Pr(R)=\Pr(R\cap W)+\Pr(R\cap W').\tag{1}$$
To calculate $\Pr(R\cap W')$, use the fact that
$$\Pr(R\mid W')=\frac{\Pr(R\cap W')}{\Pr(W')}.\tag{2}$$
You know $\Pr(R|W')$ and $\Pr(W')$, so you can find $\Pr(R\cap W')$. Now from (1) you can find $\Pr(R\cap W)$, and then, since you know $\Pr(W)$, you can find $\Pr(R\mid W)$.
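The whole computation in a few lines of Python, mirroring the informal argument:

    p_R, p_W = 0.25, 0.10
    p_R_given_notW = 0.20

    p_R_and_notW = p_R_given_notW * (1 - p_W)   # 0.18, from equation (2)
    p_R_and_W = p_R - p_R_and_notW              # 0.07, from equation (1)
    print(5000 * p_R_and_W)                     # 350.0 expected red-haired Welsh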
H: Prove that $E_1E_2= E_2E_1 = E_2$
I have this problem about projections I don't understand, Can somebody help me please?
Let $V$ be a vector space over the field $F$ and let $E_1$ and $E_2$ be projections of $V$ with images $R_1$ and $R_2$ and nullspaces $N_1$ and $N_2$ respectively. Suppose that $R_2\subset{R_1}$ and $N_1\subset{N_2}$. Prove that $E_1E_2= E_2E_1 = E_2$.
Thanks for your help
AI: Recall that if $p$ is a projection and $x\in \mathrm{Im}(p)$, then $p(x)=x$; recall also that $x-p(x)\in \ker(p)$.
We have $E_2(x)\in R_2\subset R_1$ so $E_1(E_2(x))=E_2(x)$, moreover $x-E_1(x)\in N_1\subset N_2$ so $E_2(x-E_1(x))=0$ so $E_2(E_1(x))=E_2(x)$ and we conclude. |
H: Integral of $\int^{\infty}_0\frac{x}{x^4+1}\,dx$
I want to evaluate this integral and trying to figure how to do that.
$$\int^{\infty}_0\frac{x}{x^4+1}dx$$
What I did is:
I'm ignoring the limits for now and do the following step:
$$\int^{\infty}_0\frac{x}{x^4+1}dx = \frac{1}{2}\int\frac{tdt}{t^2+1} $$
I can use this integral? $\rightarrow$ $\int \frac{x}{x^2+a}dx = \frac{ln|x^2+a|}{2}+C$? or I can do it another way?
thanks!
AI: Hint: Let $u=x^2$ and write your integral in the form
$$\int^{\infty}_0\frac{x}{x^4+1}dx = \int_0^{\infty} \frac{1}{(x^2)^2+1}x\,dx.$$ |
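Carrying the hint through: with $u=x^2$ we have $du=2x\,dx$, so
$$\int^{\infty}_0\frac{x}{x^4+1}\,dx=\frac12\int_0^\infty\frac{du}{u^2+1}=\frac12\arctan u\,\Big|_0^{\infty}=\frac{\pi}{4}.$$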
H: What is the use of the Dot Product of two vectors?
Suppose you have two vectors a and b that you want to take the dot product of, now this is done quite simply by taking each corresponding coordinate of each vector, multiplying them and then adding the result together. At the end of performing our operation we are left with a constant number.
My question therefore is: what can we do with this number; why do we calculate it, so to speak? I mean, it seems almost useless to me compared with the cross product of two vectors (where you end up with an actual vector).
AI: Re: "[the dot product] seems almost useless to me compared with the cross product of two vectors ".
Please see the Wikipedia entry for Dot Product to learn more about the significance of the dot-product, and for graphic displays which help visualize what the dot product signifies (particularly the geometric interpretation). Also, you'll learn more there about how it's used. E.g., Scroll down to "Physics" (in the linked entry) to read some of its uses:
Mechanical work is the dot product of force and displacement vectors.
Magnetic flux is the dot product of the magnetic field and the area vectors.
You've shared the algebraic definition of the dot product: how to compute it as the sum of the product of corresponding entries in two vectors: essentially, computing $\;\mathbf A \cdot \mathbf B = {\mathbf A}{\mathbf B}^T.\;$
But the dot product also has an equivalent geometric definition:
In Euclidean space, a Euclidean vector is a geometrical object that possesses both a magnitude and a direction. A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction the arrow points. The magnitude of a vector A is denoted by $\|\mathbf{A}\|.$ The dot product of two Euclidean vectors A and B is defined by
$$\mathbf A\cdot\mathbf B = \|\mathbf A\|\,\|\mathbf B\|\cos\theta,\quad\text{where $\theta$ is the angle between $A$ and $B.$} \tag{1}$$
With $(1)$, e.g., we see that we can compute (determine) the angle between two vectors, given their coordinates: $$\cos \theta =
\frac{\mathbf A\cdot\mathbf B}{\|\mathbf A\|\,\|\mathbf B\|}$$ |
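For example, a tiny NumPy sketch that recovers the angle between two vectors from their coordinates alone:

    import numpy as np

    a = np.array([1.0, 0.0, 0.0])
    b = np.array([1.0, 1.0, 0.0])

    cos_theta = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    print(np.degrees(np.arccos(cos_theta)))   # 45.0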
H: For which values of $a$ the function $f(x) = \max(x^2+2x,a) $ can be differentiate
I want to find for what values of $a$ the function can be differentiate.
$$f(x) = \max(x^2+2x,a) $$
What I tried to do is:
$$f(x) = \begin{cases} x^2+2x &\text{if } a>\dots \\
a& \text{if } a<\dots \end{cases} $$
the derivative of $x^2+2x$ is $2x+2$ and the critical point is $-1$
I need some hint how to continue.
Thanks!
AI: Try to rewrite this in your own words and supply the details.
$y=x^2+2x$ gives $y'=2x+2$; the critical minimum point is $(-1,-1)$. Hence if $a<-1$ then $\max\{x^2+2x, a\}=x^2+2x$, which is differentiable everywhere. If $a>-1$ then $x^2+2x=a$ gives $x={-1\pm \sqrt{1+a}}$. Between these points $f(x)=a$ and so $f'(x)=0$; outside of these points $f'(x)=2x+2$. At the points of intersection $f'=\pm 2\sqrt{1+a}$ on the parabola side and zero on the side between the roots. Hence the left and right derivatives differ at these points unless $a=-1$.
In short, for $a\le-1$ the function $f$ is differentiable everywhere.
For $a>-1$ it fails to be differentiable at two points, hence it is not differentiable everywhere.
H: On the fundamental theorem of field extensions
I'm re-reading the fundamental theorem of field extensions. (K is normal $\iff$ K is a factorization field.)
Assume $K=F(\alpha_1, \dots , \alpha_n)$, is the factorization field of $f\in F[x]$, over the field $F$. Then for each $\alpha\in K, \alpha = g(\alpha_1,\dots,\alpha_n)$, for some $g\in F[x_1\dots,x_n]$. Finally define $h\in K[x]$ by $$h=\prod_{\sigma\in S_n}(x-g(\alpha_{\sigma(1)},\dots,\alpha_{\sigma(n)}))$$
Here $S_n$ is the symmetric group of order $n!$ .
It was particularly hard for me to reason through proving that the coefficients of $h$ are polynomials (with coefficients in $F$) in the elementary symmetric polynomials of $\alpha_1, \dots , \alpha_n$. Furthermore, by applying Vieta's formulas to $f$ we find that its coefficients are symmetric polynomials (with integer coefficients) of its roots $\alpha_1, \dots , \alpha_n$. My book hereby concludes that $h\in F[x]$.
I may be just tired by now but I feel like there is a step or two missing in the above conclusion. Any comments would be appreciated.
AI: Notice that the Vieta formula for each coefficient is unchanged when we permute the $\alpha_i$ by any element of the symmetric group. This means that each coefficient is a symmetric polynomial.
But all symmetric polynomials of the roots of a polynomial $f$ have their values in the base field. This is essentially the converse of Vieta's formulas. |
H: rank of a diagonal matrix + rank-one perturbation
Let $D$ be a $n \times n$ diagonal matrix, and $A$ is a $n \times n$ rank-one matrix that can be rewritten as $A=a\cdot b^T$, where $a$ and $b$ are $n \times 1$ vectors. Now what is the lower bound for the rank of the matrix $D+A$ ? Any suggestions are welcome.
AI: In general, one has $\def\rk{\mathop{\rm rank}}$
$$ \rk (A+B) \le \rk A + \rk B $$
for matrices $A, B \in \mathbb R^{n\times m}$ (note that the subspace generated by the union of the images of $A$ and $B$ contains the image of $A+B$). For your example, we have
$$ \rk D \le \rk(D+A) + \rk(-A) $$
as $\rk(-A) = 1$, we have
$$ \rk(D+A) \ge \rk D - \rk A = \rk D - 1. $$ |
H: Limit of $\lim\limits_{n\to \infty}\left(\frac{1^\frac{1}{3}+2^\frac{1}{3}+3^\frac{1}{3}+\dots+n^\frac{1}{3}}{n\cdot n^\frac{1}{3}} \right)$
I want to evaluate this limit.
$$\lim_{n\to \infty}\left(\frac{1^\frac{1}{3}+2^\frac{1}{3}+3^\frac{1}{3}+\dots+n^\frac{1}{3}}{n\cdot n^\frac{1}{3}} \right)$$
What I did is:
set $f(x)=x^\frac{1}{3}$
$$f(1)+\int^\infty_0 x^\frac{1}{3} \, dx<f(1)+f(2)+\dots+f(n)<f(n)+\int^\infty_0 x^\frac{1}{3}\,dx$$
How can it help me evaluate this limit?
Can I convert the right/left expressions to limits, or the whole inequality into a statement about limits?
I need some advice here,
Thanks!
AI: Hint: An integral over $[0,\infty)$ does not look right to me, it seems better to observe that with your $f$ as above
\begin{align*}
\frac{1^{1/3} + \cdots + n^{1/3}}{n \cdot n^{1/3}} &= \frac{f(1/n) + f(2/n) + \cdots + f(1)}{n}\\
&= \sum_{i=1}^n f(i/n) \cdot \frac 1n
\end{align*} |
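This Riemann sum converges to $\int_0^1 x^{1/3}\,dx = \frac34$, which a quick numerical sketch confirms:

    n = 10**6
    riemann_sum = sum((i / n)**(1/3) for i in range(1, n + 1)) / n
    print(riemann_sum)   # approximately 0.75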
H: finding the maximum area of 2 circles
An equilateral triangle with height $h$ has 2 different incircles.
the bottom circle is tangent to the base of the triangle at the middle point of the base.
what should be the radius of the upper circle so the sum of the area of the circles will be maximum?
I tried to find a connection between the radius and $h$, but I got that the radius should be $0$, and that doesn't seem right.
thanks.
btw, the answer should be $\frac{1}{9}h$
AI: If the lower circle is inscribed as above (with radius $DE=r$) then, the maximum circle that can be inscribed above it is the inscribed circle having the three sides of $\Delta CGH$ as tangents, $GH$ parallel with $AB$ and $CE=h$ perpendicular to $AB$. which implies that $\Delta CGH$ is also equilateral. With similar triangles one can work out the side length of $\Delta CGH$ which is $\cfrac {2\sqrt 3(h-2r)}{3}$ and the radius of its inscribed circle is $\cfrac {\sqrt 3}{6}\cdot \cfrac {2\sqrt 3(h-2r)}{3}$ and the only way this can be equal to $\cfrac h9$ is if $r=\cfrac h3$
UPDATE: We have two circles of radii $r$ and $\cfrac{h-2r}3$, and the sum of their areas is $\pi r^2+\pi \cfrac{(h-2r)^2}9$. You can reduce the problem to finding the value of $r$ for which this area is maximum and then substitute that value in the expression above.
$$\max_ {0\le r \le h/3}\left( \pi r^2+\pi \cfrac{(h-2r)^2}9\right)$$ |
H: Distribution of Modular Expressions
This is really a programming question that I'm unable to solve because of a math question. I'm having a problem understanding the rules of distribution with modular arithmetic.
I have two expressions:
a = q mod c + (n - 1) mod c
b = q mod c - (n - 1) mod c
I made the mistake of concluding that the expressions were equivalent to:
a = (q + n - 1) mod c
b = (q - n + 1) mod c
What would the equivalent equations be?
AI: Mathematically, they're congruent modulo $c$, but I think the difference is that in computers the first set you gave is going to potentially evaluate to outputs larger than $c$ (or negative), while in mathematics we would automatically reduce the output of the first two equations mod $c$ again so that the result is in $\{0,\dots,c-1\}$.
If you take the results of the top two equations mod $c$ again, you should get answers that match the outputs of the second box. |
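A small Python sketch of this (the example values are arbitrary; note that Python's % already returns a value in {0, ..., c-1} for positive c, while in C-like languages the remainder of a negative number needs extra care):

    q, n, c = 17, 5, 7

    a = (q % c + (n - 1) % c) % c   # reduce once more at the end
    b = (q % c - (n - 1) % c) % c

    print(a == (q + n - 1) % c)     # True
    print(b == (q - n + 1) % c)     # True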
H: Generalizing the central series to ordinal length
One can generalize the ascending and descending central series by transfinite induction, setting $G _{\alpha +1}=[G_\alpha, G]$ and $ G_\beta= \cap _{\alpha <\beta} G_\alpha $ (and analogously for the upper centers).
What is known about these series? Above all, if one of these series terminates, does the other series also terminate at the same ordinal? Is there a characterization of the groups where it terminates?
AI: A free group $F$ is residually nilpotent, that is, its lower central series has $$F_{\omega} = \bigcap_{n \in \Bbb{N}} F_{n} = \{ 1 \},$$ but if $F$ is non-abelian, then $F$ is centreless, so its upper central series never gets a chance.
Consider now the locally dihedral group $G = D(2^{\infty})$, which is the extension of $Z(2^{\infty})$ by an involution inverting each element. Now $G$ is hypercentral (its upper central series terminates at $G$ after $\omega + 1$ steps), but the lower central series stops at $Z(2^{\infty})$. (See this article.) |
H: Question on linear maps defined in Khovanov homology
There are two linear maps $m:V \otimes V \rightarrow V$ and $\Delta:V \rightarrow V\otimes V$ in the definition of the differential of Khovanov homology. So my question is why do they map elements as below?
$$m:v_+ \otimes v_- \rightarrow v_-$$
$$m:v_+ \otimes v_+ \rightarrow v_+$$
$$m:v_- \otimes v_+ \rightarrow v_-$$
$$m:v_- \otimes v_- \rightarrow 0$$
and
$$\Delta:v_+ \rightarrow v_+ \otimes v_-+v_-\otimes v_+$$
$$\Delta:v_- \rightarrow v_- \otimes v_-$$
Thank you!
AI: First note that there are other definitions you could choose. Eun Soo Lee uses an altered differential (introduced in section 4 of this paper), and the resulting theory was used to great effect by Jake Rasmussen in his paper on slice genus.
For discussion of Khovanov's differential, I recommend Bar-Natan's explanation in his colorful introduction to Khovanov homology. Paraphrasing section 3.2, the differential ought to be of degree 0 and be invariant under any reordering of the cycles. This requires that the multiplication and comultiplication maps be of degree $-1$ and be commutative and co-commutative, respectively. Working in $\mathbb{Z}/2\mathbb{Z}$, this determines $m$ and $\Delta$. |
H: Rouche's theorem problem
Prove that the polynomial $z^n + nz-1$ has $n$ zeroes inside the circle with centre at $0$ and radius $1+\sqrt{2/(n-1)}$ for $n=3,4,\dotsc$
Please give me some hints as to how to apply Rouché's theorem nicely, without expanding binomials with square roots all over the place. Thanks!
AI: Hint: Do you know this one? For $n\ge 3$ and $x>0$,
$(1+x)^n> 1+nx+\binom{n}{2}x^2$
(you will need the quadratic term here; the bare Bernoulli inequality $(1+x)^n\ge 1+nx$ is not quite enough).
Take $f(z)=z^n$, $g(z)=nz-1$, then show that $|f(z)|>|g(z)|$ on $|z|=1+\sqrt{{2\over(n-1)}}$.
Hence $f$ and $f+g$ have the same number of roots inside the specified circle.
H: There are no integers $x,y$ such that $x^2-6y^2=7$
How can I show that there are no integers $x,y$ such that $x^2-6y^2=7$?
Help me. I'm clueless.
AI: Hint: Show that, if there are any solutions $x$ and $y$, then it cannot be the case that both $x$ and $y$ are multiples of $7$. Then work modulo $7$. Which of $0,1,2,3,4,5,6$ is a square modulo $7$? What is $-6$ modulo $7$? |
H: Diophantine: $x^3+y^3=z^3 \pm 1$
How many nontrivial integer solutions does $x^3+y^3=z^3 \pm 1$ have?
The trivial solutions are $(\pm 1,z,z)$ and $(z,\pm 1,z)$.
AI: There are infinitely many solutions.
Parametric subset of solutions for $x^3+y^3=z^3+1$:
$x=9n^3+1$,
$y=9n^4$, $\qquad\qquad\qquad$ $(n\in \mathbb{N})$;
$z=9n^4+3n$.
Parametric subset of solutions for $x^3+y^3=z^3-1$:
$x=9n^3-1$,
$y=9n^4-3n$, $\qquad\qquad\qquad$ $(n\in \mathbb{N})$;
$z=9n^4$.
Of course, there are other solutions too (out of described subsets):
$64^3+94^3=103^3+1$, $\qquad$ $135^3+235^3=249^3+1$, $\quad\ldots\;$;
$135^3+138^3=172^3-1$, $\quad$ $372^3+426^3=505^3-1$, $\quad\ldots\;$.
Explanation:
How to obtain these subsets?
I printed starting solutions (using brute force) of equation
$x^3+y^3=z^3+1$.
$(x,y,z) : \alpha$, $\qquad$ where $x\leqslant y, \quad \alpha = y/x$;
$(9, 10, 12) : 1.11111$
$(64, 94, 103) : 1.46875$
$(73, 144, 150) : \color{#FF2200}{1.9726}$
$(135, 235, 249) : 1.74074$
$(334, 438, 495) : 1.31138$
$(244, 729, 738) : \color{#FF2200}{2.9877}$
$(368, 1537, 1544) : 4.17663$
$(1033, 1738, 1852) : 1.68248$
$(1010, 1897, 1988) : 1.87822$
$(577, 2304, 2316) : \color{#FF2200}{3.99307}$
$(3097, 3518, 4184) : 1.13594$
$(3753, 4528, 5262) : 1.2065$
$(1126, 5625, 5640) : \color{#FF2200}{4.99556}$
$(4083, 8343, 8657) : 2.04335$
$(5856, 9036, 9791) : 1.54303$
$(3987, 9735, 9953) : 2.44169$
$(11161, 11468, 14258) : 1.02751$
$(1945, 11664, 11682) : \color{#FF2200}{5.99692}$
$(13294, 19386, 21279) : 1.45825$
$(3088, 21609, 21630) : \color{#FF2200}{6.99773}$
$(10876, 31180, 31615) : 2.86686$
$(27238, 33412, 38599) : 1.22667$
$(27784, 35385, 40362) : 1.27357$
$(16617, 35442, 36620) : 2.13288$
$(4609, 36864, 36888) : \color{#FF2200}{7.99826}$
$\ldots$
and observed that some $\alpha$ are almost integer (I denoted them by red color).
So, I created "red list" $-$ subset of solutions:
$\color{#AAAAAA}{(10, 9, 12) : 0.9}$
$(73, 144, 150) : \color{#FF2200}{1.9726}$
$(244, 729, 738) : \color{#FF2200}{2.9877}$
$(577, 2304, 2316) : \color{#FF2200}{3.99307}$
$(1126, 5625, 5640) : \color{#FF2200}{4.99556}$
$(1945, 11664, 11682) : \color{#FF2200}{5.99692}$
$(3088, 21609, 21630) : \color{#FF2200}{6.99773}$
$(4609, 36864, 36888) : \color{#FF2200}{7.99826}$
$\ldots$
Then observed regularity $(9n^3+1,9n^4,9n^4+3n)$.
It is easy to prove that it is true:
$z^3+1-x^3-y^3 = (9n^4+3n)^3 + 1 - (9n^3+1)^3 - (9n^4)^3 =$
$(729n^{12}+729n^9+243n^6+27n^3) + 1 - (729n^9+243n^6+27n^3+1) - 729n^{12} = 0.$
Same way $-$ for equation $x^3+y^3=z^3-1$. |
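A minimal brute-force sketch of the search described above (the bound is small, just enough to reproduce the first few rows of the lists):

    N = 300
    for x in range(2, N):              # x = 1 only yields the trivial family
        for y in range(x, N):
            s = x**3 + y**3
            for d in (1, -1):          # solve x^3 + y^3 = z^3 + d
                z = round((s - d) ** (1 / 3))
                if z**3 == s - d:
                    print(x, y, z, d)  # (6, 8, 9, -1), (9, 10, 12, 1), ...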
H: Gauss Lemma for Polynomials and Divisibility in $\mathbb Z$ and $\mathbb Q$.
I am working through Gauss Lemma and various corollaries of it. In the book Algebra of Michael Artin, I have a question to the proof of the following Theorem:
Theorem.
(a) Let $f,g$ be polynomials in $\mathbb Q[x]$, and let $f_0, g_0$ be the associated primitive polynomials in $\mathbb Z[x]$. If $f$ divides $g$ in $\mathbb Q[x]$, then $f_0$ divides $g_0$ in $\mathbb Z[x]$.
(b) Let $f$ be a primitive polynomial in $\mathbb Z[x]$, and let $g$ be any polynomial with integer coefficients. Suppose that $f$ divides $g$ in $\mathbb Q[x]$, say $g = fq$, with $q \in \mathbb Q[x]$. Then $q \in \mathbb Z[x]$, and hence $f$ divides $g$ in $\mathbb Z[x]$.
(c) Let $f, g$ be polynomials in $\mathbb Z[x]$. If they have a common nonconstant factor in $\mathbb Q[x]$, then they have a common nonconstant factor in $\mathbb Z[x]$ too.
Proof: To prove (a), we may clear denominators so that $f$ and $g$ become primitive. Then (a) is a consequence of (b). To prove (b), we apply (3.1) in order to write the quotient in the form $q = cq_0$, where $q_0$ is primitive and $c \in \mathbb Q$. By Gauss's Lemma, $fq_0$ is primitive, and the equation $g = cfq_0$ shows that it is the primitive polynomial $g_0$ associated to $g$. Therefore $g = c g_0$ is the expression for $g$ referred to in Lemma (3.1), and $c$ is the content of $g$. Since $g \in \mathbb Z[x]$, it follows that $c \in \mathbb Z$, hence that $q \in \mathbb Z[x]$. Finally, to prove (c), suppose that $f,g$ have a common factor $h$ in $\mathbb Q[x]$. We may assume that $h$ is primitive, and then by (b) $h$ divides both $f$ and $g$ in $\mathbb Z[x]$.
The following results are proved in the chapter before.
(3.1) Lemma. Every nonzero polynomial $f(x) \in \mathbb Q[x]$ can be written as a product
$$
f(x) = cf_0(x),
$$
where $c$ is a rational number and $f_0(x)$ is a primitive polynomial in $\mathbb Z[x]$.
(3.3) Theorem. Gauss's Lemma: A product of primitive polynomials in $\mathbb Z[x]$ is primitive.
I have a question about the proof of part (c) of the Theorem. Why can we assume that the common factor $h$ is primitive? The notion of a primitive polynomial is defined only for polynomials in $\mathbb Z[x]$, and $h$ is in $\mathbb Q[x]$. So to me this step makes no sense...
AI: If we multiply $h \in \mathbb{Q}[x]$ by an integer constant with just enough prime factors to clear denominators, then $h$ becomes a primitive polynomial in $\mathbb{Z}[x]$. This new $h$ is still a common factor of $f$ and $g$ in $\mathbb{Q}[x]$ (because integer constants are units in $\mathbb{Q}[x]$). So we can assume we started with such a primitive $h$ as a common factor of $f$ and $g$. |
H: Integral substitutions.
For integrals of the form:
$$\intop_a^bg(t)dt,$$
we can apply the $tanh$-substitution to transform the integral into a doubly infinite integral, i.e:
$$\intop_a^bg(t)dt = \frac{b-a}{2}\intop_{-\infty}^\infty g \left( \frac{b+a}{2} + \frac{b-a}{2}\tanh(u) \right)\mathrm{sech}^2(u)\, du.$$
My question is - Is there another substitution method we can use to transform:
$$\intop_a^bg(t)dt,$$
to a singly infinite integral, e.g: $$\intop_0^{\infty} g(f(u))du?$$
AI: You can do this in two steps:
First make a change of variable $t\rightarrow \frac{1}{b-a}(t-a)$. This will change the integration to be over the interval $(0,1)$.
Make a secondary change of variable $t\rightarrow\frac{1-t}{t}$. This will change the integration over $(0,1)$ to an integration over $(0,\infty)$. |
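Chaining the two substitutions gives the single formula (a direct computation from the steps above):
$$\int_a^b g(t)\,dt=(b-a)\int_0^{\infty} g\!\left(a+\frac{b-a}{1+u}\right)\frac{du}{(1+u)^2}.$$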
H: Factoring a third degree polynomial
I'm trying to find all solutions for $36x^3-127x+91=0$ with $x \in \mathbb{R}$. So, I tried to factor this polynomial. It can be written in the following way:
$$
(ax^2+bx+c)\cdot(dx+e)\quad (a,b,c,d,e \in \mathbb{Z})
$$
with
\begin{cases}
a \cdot d = 36 = 2^2 \cdot 3^2\\
a \cdot e + b \cdot d = 0\\
b \cdot e + c \cdot d = -127\\
c \cdot e = 91 = 7 \cdot 13
\end{cases}
How do I proceed from here? Should I guess possible values for $e$ ($±1,±7,±13,±91$) and see if it yields integer solutions for $a,b,c,d$? Or is there an easier method?
AI: $$36x^3-127x+91=0$$
$$36x^3-36x-91x+91=0$$
$$36x(x^2-1)-91(x-1)=0$$
$$36x(x+1)(x-1)-91(x-1)=0$$
$$(x-1)(36x(x+1)-91)=0$$
$$(x-1)(36x^2+36x-91)=0$$ |
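The remaining quadratic also factors over $\mathbb Z$ (its discriminant is $36^2+4\cdot 36\cdot 91=14400=120^2$):
$$(x-1)(6x-7)(6x+13)=0,$$
so the real solutions are $x=1$, $x=\frac{7}{6}$ and $x=-\frac{13}{6}$.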
H: How to evaluate binomial coefficients when $k=0$ and $1\geq|n|\geq0$
So I am doing some Taylor series which boil down to the geometric series, for which I then need to evaluate various binomial coefficients. I have always used ${n\choose k} =\frac{n!}{(n-k)!k!}$ to do these; this however becomes troublesome for non-integer and/or negative values of $n$.
Now I just memorized that if $k=0$, then ${n\choose k} = 1$ for integers $n\geq 0$. But this still leaves me with negative and/or non-integer values for $n$. My textbook provides me with ${n\choose k} = \frac{n(n-1)...(n-k+1)}{k!}$, but I'm still not sure how to evaluate this with $k=0$. Somehow I always get it wrong.
For example I get ${-1\choose 0} =\frac{-1(-1-0+1)}{0!}=\frac{-1\cdot 0}{1}=0$, but it should be $1$. What's my error? Thanks!
AI: I think you got confused by the notation $n(n-1)\dots(n-k+1)$. This is intended to mean the product of all the numbers you get when you start at $n$ and decrease, in steps of $1$ (to $n-1$ and so forth) until you reach $n-k+1$; so you don't include the factor that would come next, $n-k$. So there are $k$ factors here, the largest of which is $n$. Now when $k=0$, you're supposed to still use the same interpretation, not (as the notation made you reasonably think) start counting upward from $n$ instead of downward. So you should form a product of a sequence of numbers, starting at $n$ and decreasing in steps of $1$ down to and including $n+1$ (that's $n-k+1$) but not including $n$ (that's $n-k$). So you're supposed to stop before you start! In other words, when $k=0$, the expression $n(n-1)\dots(n-k+1)$ is intended to mean the product of no factors, which is $1$, not the product of two factors as you supposed. |
H: when Fourier transform function in $\mathbb C$?
The Fourier transform of a function $f\in\mathscr L^1(\mathbb R)$ is
$$\widehat f\colon\mathbb R\rightarrow\mathbb C, x\mapsto\int_{-\infty}^\infty f(t)\exp(-ixt)\,\textrm{d}t$$
When is this indeed a function into $\mathbb C$? In most calculations you get functions into $\mathbb R$. When into $\mathbb C$?
Add: I know there are results like $\frac{e^{ait}-e^{-ait}}{2i}=\sin(at)$ multiplied by 'anything', but I am asking for a Fourier transform which you cannot write as a function into $\mathbb R$.
AI: Anything that is not symmetric, such as
$$f(t) = \begin{cases} e^{-t} & t>0 \\ 0 & t< 0 \end{cases}$$
whose FT is
$$\hat{f}(x) = \frac{1}{1+i x}$$ |
H: span of $AA^T$ is the same as $A$?
Suppose $A$ is an $m$ by $n$ real matrix.
How do you prove that the span of the columns of $AA^T$ is the same as the span of the columns of $A$?
AI: Note: I am using repeatedly the following property: $b \in Col(B)$ if and only if $Bx=b$ is consistent.
$Col(AA^T) \subset Col(A)$ is clear. We prove the other inclusion.
Let $b \in Col(A)$. Then, there exists some $x$ so that $Ax=b$.
Let $y=proj_{Col(A^T)}x$. This means that $y \in Col(A^T)$ and $x-y \perp Col(A^T)$.
Let $y= A^Tz$. We claim that $AA^Tz=b$.Indeed
$$AA^Tz=Ay \,.$$
and $x-y \perp Col(A^T)=Row(A) \Rightarrow A(x-y)=0 \Rightarrow Ay=Ax=b$.
P.S. This is just the basic idea behind why solving the normal equations solves least-squares problems.
H: X is infinite if and only if X is equivalent to a proper subset of itself.
Prove that a set $X$ is infinite if and only if $X$ is equivalent to a proper subset of itself.
If $X$ is finite, then suppose $|X|=n$. Any proper subset $Y$ of $X$ has size $m<n$, and so there cannot be any bijective mapping between $Y$ and $X$.
If $X$ is countably infinite, then suppose $X=\{x_1,x_2,\ldots\}$. We can map $X$ to $Y=\{x_2,x_3,\ldots\}$ by using the map $f(x_i)=x_{i+1}$.
But what if $X$ is uncountably infinite? How can we specify the mapping?
AI: The solution was in the previous title: if $X$ is infinite, then it contains a countably infinite subset, say $X_0$. The map you gave is a bijection $X_0\to X_0\setminus\{x_1\}$; it extends to a bijection $X\to X\setminus\{x_1\}$ that acts as the identity on $X\setminus X_0$.
H: Solving Lagrange multipliers system
I need help solving this system:
$$
\begin{cases} 2(x-1) = \lambda2x \\ 2(y-2) = \lambda2y \\ 2(z-2) = \lambda2z \\x^2 + y^2+z^2 = 1 \end{cases}
$$
I can find $$ \lambda = (x-1)/x $$ but can't go further.
Any help?
AI: First three equations lead to:
$$\lambda = \frac{x-1}{x} = \frac{y-2}{y} = \frac{z-2}{z}.$$
Now subtracting 1 on every term implies
$$\lambda-1 = -\frac{1}{x} = -\frac{2}{y} = -\frac{2}{z}.$$
Now you found a relation of $x,y$ and $z$:
$$
2x = y = z.
$$
I believe you can take it from here. |
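(To check your final answer against: substituting $y=z=2x$ into the constraint gives $9x^2=1$, so the candidate points are $\pm\left(\frac13,\frac23,\frac23\right)$.)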
H: Using Newton's Generalized Binomial Theorem
I am trying to use Newton's Theorem in a proof with some inequalities. I have something of the form $(a+b)^c$ in my denominator, where $0<c<1$, and I'd like to find a sharp expression less than or equal to this with at least 2 terms (exactly 2 would be nice). Since it is not true in general that $(a+b)^c \ge a^c+b^c,$ my first idea was to try and use Newton's Binomial Theorem to see if I can find an expression involving $a,b,c$ which is less than $(a+b)^c$.
I am fiddling with an example to see if I understand:
Since $r\choose{k}$$=0$ when $r<k$,
$(5+2)^.5 = \sum_{k=0}^\infty $$.5\choose{k}$$5^{.5-k}2^k=5^.5,$ which is clearly wrong.
Where am I going wrong?
AI: The problem is that the statement "${r\choose k}=0$ for $r<k$" is true only when $r$ is a non-negative integer (and $k$ an integer). For non-integer $r$, such as $r=0.5$, none of the binomial coefficients vanish, so the series really is infinite.
H: about the rank of block matrix
Let $m,n,$ and $k$ be positive integers and assume that $A\in {\mathbb{R}^{m\times n}}$,$B\in {\mathbb{R}^{k\times n}}$,( that is, A and B are matrices with real entries of sizes $m\times n$ and $k\times n$, respectively). Define $C=\left[\frac{A}{B}\right]$. Prove that $rank(C)\ge rank(A)$.
AI: The $\def\rk{\mathop{\rm rk}}$rank of $A$ equals the dimension of the rowspace of $A$. Now the rowspace of $A$ is obviously a subspace of the rowspace of $[{A \atop B}] = C$, so $\rk A \le \rk C$.
If you want to argue with the column space, you can do as follows: Let $a_{i_1}, \ldots, a_{i_k}$ be independent columns of $A$, and $b_{i_1}, \ldots, b_{i_k}$ the corresponding columns of $B$. Suppose for some $\lambda_j \in K$ (the ground field), we have
$$ \sum_{j=1}^k \lambda_j \binom{a_{i_j}}{b_{i_j}} = 0 $$
this gives $\sum_j \lambda_j a_{i_j} = 0$, and linear independence of the $a_{i_j}$ gives $\lambda_j = 0$ for all $j$. So the $[{a_{i_j}\atop b_{i_j}}]$ are independent columns of $C$. As the rank is the maximal number of independent columns, this number is for $C$ at least as large as for $A$.
H: Exterior Measure and Non-Measurable Sets
Can you help me to prove that a set $E\subset\mathbb{R}$ has non-measurable subsets (Lebesgue Measure) if and only if $m^{*}(E)>0$ ?
Somehow I couldn't generalize the Vitali set to prove the $\Leftarrow$ direction. The other direction is not a problem!
Thank you :)
AI: Try this instead. Take a Bernstein set $B$ (a set that has nonempty intersection with every uncountable closed set and whose complement also has nonempty intersection with every uncountable closed set) in $\mathbb R$. Both $B$ and its complement $B^c$ are Bernstein sets, and at least one exists (this was shown by Bernstein).
Take $E \cap B$. Every uncountable closed subset of $\mathbb R$ meets $B^c$, so any closed set contained in $E \cap B$ must be countable. Now take any measurable subset $A \subset E \cap B$: every closed set contained in $A$ is countable, hence null, so by inner regularity $A$ must have measure $0$. In particular, if $E \cap B$ is measurable, then it has measure $0$. We can repeat the argument for $E \cap B^c$.
But if $E \cap B$ and $E \cap B^c$ are both measurable, then their union is measurable and must thus have measure $0$, a contradiction. |
H: Does the triangle inequality follow from the rest of the properties of a subfield-valued absolute value?
(This is a much more specific version of my earlier question from over a year ago.)
Let $F$ be a field, let $E$ be an ordered subfield of $F$, and let $\;\; |\hspace{-0.03 in}\cdot\hspace{-0.03 in}| \: : \: F \: \to \: E$
be such that for all members $x$ of $E$, $\:$ for all members $y$ and $z$ of $F$,
$(1)$ $0\leq x \: \implies \: |\hspace{.01 in}x\hspace{.01 in}| = x$
$(2)$ $0\leq |y\hspace{.01 in}|$
$(3)$ $|y\cdot z\hspace{.01 in}| \: = \: |y\hspace{.01 in}| \cdot |\hspace{.01 in}z\hspace{.01 in}|$
Does it follow that for all members $y$ and $z$ of $\hspace{.01 in}F$, $\;\; |y+z\hspace{.01 in}| \: \leq \: |y\hspace{.01 in}| + |\hspace{.01 in}z\hspace{.01 in}| \;\;\;\;$?
I suspect the answer is no, but I have not been able to come up with any counterexample.
AI: Let $F=\mathbb Q((X))$ be the field of formal Laurent series with rational coefficients, $E=\mathbb Q$. If $a=\sum_{k=n}^\infty a_k X^k$ with $a_n\ne 0$, let $|a|=|a_n|$ (and let $|0|=0$).
Then $(1),(2),(3)$ are readily verified.
But $|1+2X|= 1$, $|-1+2X|=1$, $|(1+2X)+(-1+2X)|=|4X|=4>1+1$. |
H: How should I numerically solve this PDE?
I am hoping to figure out the function $u(x,y,t)$ for some integer arguments when $u(x,y,0)$ is given (by figuring out I mean generating some images in MATLAB); here time $t \ge 0$.
$$\frac{\partial u}{\partial t} = -(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}) - (\frac{\partial^4 u}{\partial x^4} + \frac{\partial^4 u}{\partial y^4}) - u \cdot (\frac{\partial u}{\partial x} + \frac{\partial u}{\partial y})$$
What should I do? I believe it is reasonable to express $\partial u / \partial t$ and relate to numerical integration which means something like
$$u(x,y,t+1) = u(x,y,t) + \ldots$$
The sum of second derivatives is known as Laplacian and approximated in this Wikipedia page so
$$u(x,y,t+1) = u(x,y,t) - (u(x-1,y,t) + u(x+1,y,t) + u(x,y-1,t) + u(x,y+1,t) - 4u(x,y,t)) + \ldots$$
I do not know the approximation of the summed fourth and first derivatives, nor how to take care of the direct $u$ multiplier (although I guess it may be something like $u(x,y,t)+u(x,y,t-1)+u(x,y,t-2)+\ldots$).
AI: This question actually belongs on a computational-science site rather than Mathematics. Just approximate each term on the right-hand side by finite differences; the needed formulas are summarized at http://en.wikipedia.org/wiki/Finite_difference_coefficients. Then sum up those RHS terms and see if there is any cancellation; do that analytically, since numerical cancellation may be a source of numerical error. You will see that this procedure reproduces the Laplacian finite-difference formula that you quote. You will thus get a finite-difference approximation of your RHS and should in principle be able to go forward with time integration. Using an explicit time integrator is not practical for this type of problem, since the time step has to be very small for stability. If you have access to an implicit time integrator, like LSODE, there will be much less of a problem with stability. But that is a long, separate discussion.
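To make this concrete, here is a minimal Python/SciPy sketch, assuming periodic boundary conditions and unit grid spacing (both assumptions you would adjust to your actual setup); the stencils are the standard central-difference ones from the page linked above:

    import numpy as np
    from scipy.integrate import solve_ivp

    n = 64                                    # n x n grid, spacing h = 1

    def rhs(t, u_flat):
        u = u_flat.reshape(n, n)
        def d1(a, ax):                        # central first derivative
            return (np.roll(a, -1, ax) - np.roll(a, 1, ax)) / 2
        def d2(a, ax):                        # second derivative: 1 -2 1
            return np.roll(a, -1, ax) - 2 * a + np.roll(a, 1, ax)
        def d4(a, ax):                        # fourth derivative: 1 -4 6 -4 1
            return (np.roll(a, -2, ax) - 4 * np.roll(a, -1, ax) + 6 * a
                    - 4 * np.roll(a, 1, ax) + np.roll(a, 2, ax))
        du = (-(d2(u, 0) + d2(u, 1)) - (d4(u, 0) + d4(u, 1))
              - u * (d1(u, 0) + d1(u, 1)))
        return du.ravel()

    u0 = 0.1 * np.random.default_rng(0).standard_normal((n, n))
    sol = solve_ivp(rhs, (0.0, 1.0), u0.ravel(), method="BDF")   # implicit, as advised
    u_final = sol.y[:, -1].reshape(n, n)      # visualize with imshow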
H: How can this implication be equal to the set of all possible evaluations?
I am having a hard time with this (simple) exercise in logic:
First, let us define the set $M$ which contains all possible evaluations.
Then, for each propositional sentence $A$, we define a set $[A] = \{ v \in M | v(A) = 1 \}$
So, the task is to show that $[A\rightarrow B]=M$ if and only if $[A]\subseteq[B]$
The reason why I am having a hard time understanding this is that imho $M$ should contain every evaluation, including the one where $v(A)=1$ and $v(B)=0$. But already the definition of $[A\rightarrow B]$ means that it can contain only those evaluations for which $v(A\rightarrow B)=1$.
So, even without the condition $[A]\subseteq[B]$, how could $[A\rightarrow B]$ ever be the same set as all of $M$, if by definition it can't contain those evaluations under which the implication is not true?
I do not need a solution but simply an explanation of what I am not getting in this exercise...
AI: A valuation $v$ satisfies a conditional $\phi\to\psi$ when: if $v$ satisfies $\phi$, then $v$ satisfies $\psi$. Equivalently, $v$ satisfies $\phi\to\psi$ if it satisfies $\lnot\phi$ or satisfies $\psi$. (This is the equivalence between $\phi\to\psi$ and $\lnot\phi\lor\psi$.)
Suppose that the set of valuations that satisfy a sentence $\phi$, that is, $[\phi]$ is a subset of the set of valuations that satisfy a sentence $\psi$, that is, $[\psi]$. Symbolically,
$$ [\phi] \subseteq [\psi] \tag{1}$$
Now pick an arbitrary valuation $v_0$. If it satisfies $\phi$, then it is an element of $[\phi]$, and so, by (1), also of $[\psi]$. Then since $v_0$ makes both $\phi$ and $\psi$ true, it makes $\phi\to\psi$ true as well. On the other hand, if $v_0$ does not satisfy $\phi$, then it also makes $\phi\to\psi$ true, because a conditional is true in the case that the antecedent is false. Since $v_0$ is arbitrary, we conclude that every valuation satisfies $\phi\to\psi$.
We started with the assumption that $[\phi] \subseteq [\psi]$, so our final conclusion is a conditional:
If the set of all valuations that make $\phi$ true is a subset of the set of all valuations that make $\psi$ true, that is, $[\phi] \subseteq [\psi]$, then every valuation in $M$ satisfies $\phi\to\psi$.
That's one direction of the final biconditional. The other direction is proved by supposing that $[\phi\to\psi] = M$. You are right in observing that for some $\phi$ and $\psi$, it is impossible that this is actually the case, but for others it is quite possible, even when $\phi$ and $\psi$ are propositional variables. For instance, consider the case $P \to P$, where $\phi$ is $P$ and $\psi$ is $P$. At any rate, the proof would proceed by asking what it means that $[\phi\to\psi] = M$. It means that for every $v \in M$, $v$ satisfies $\phi\to\psi$. What does that mean? … And so on, until reaching the conclusion that if $v$ is in $[\phi]$, then $v$ is in $[\psi]$, which implies that $[\phi] \subseteq [\psi]$. (You said you didn't need a whole proof, so you can fill in the “…”.) This direction concludes that
If every valuation in $M$ satisfies $\phi\to\psi$, then the set of all valuations that make $\phi$ true is a subset of the set of all valuations that make $\psi$ true, that is, $[\phi] \subseteq [\psi]$.
Then the two are combined, and the biconditional is proven. |
H: Order type of the real algebraic numbers
As a countable, everywhere-dense, totally ordered set without minimal or maximal elements, $\Bbb{A}$, the set of real algebraic numbers, must be order isomorphic to $\Bbb{Q}$. I'm wondering how "nice" such an isomorphism can be made. Does it admit any reasonably explicit description?
Also, what would the asymptotic properties of an order isomorphism $\Bbb{A} \rightarrow \Bbb{Q}$ be? It seems like it would have to grow quite quickly, but I'm not sure how quickly.
AI: I don't think there's a nice-looking explicit isomorphism. On the other hand, given any $\varepsilon>0$ there exist isomorphisms that move no points by more than $\varepsilon$. Proof: Partition the real line into successive intervals with lengths $<\varepsilon$ and with transcendental endpoints. Then build the isomorphism separately within each of these intervals. |
H: Markov Chain Converging in Single Step
I have a Markov kernel $K$. From this I find the invariant probability $\pi$. The question is to design a "dream" matrix $K^*$ that converges in one step, such that $\lambda_{SLEM}=0$ (second largest eigenvalue modulus). I am not sure how to go about designing the dream matrix. Any pointers to the literature would be welcome.
AI: Consider three states $A,B,C$. We have that $A$ almost surely goes to itself and no other states. Let $B$ almost surely go to $A$ and to no other states. Let $C$ go to $A$ with probability $0.5$ and to $B$ with probability $0.5$.
The transition matrix may be seen as $M = \left( \begin{array}{ccc}
1 & 0 & 0 \\
1 & 0 & 0 \\
0.5 & 0.5 & 0 \end{array} \right) $
The eigenvalues of the associated transition matrix $M$ are $1,0,0$. The powers $M^2, M^3, \dots$ of the transition matrix all equal the matrix $A$ defined by
$A= \left( \begin{array}{ccc}
1 & 0 & 0 \\
1 & 0 & 0 \\
1 & 0 & 0 \end{array} \right) $
This converges in just one step.
EDIT:
Suppose we didn't know how to get there. We seek a transition matrix with an existing limiting distribution, then the Perron-Frobenius theorem tells us the spectral radius of this matrix must be $1$ with unique multiplicity one and all other eigenvalues smaller. So simply pick a matrix of order three by three with nine coefficients to be found and calculate what you need to satisfy those conditions. |
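A quick NumPy check of this construction:

    import numpy as np

    M = np.array([[1.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.5, 0.5, 0.0]])

    print(np.linalg.eigvals(M))   # [1. 0. 0.]; the SLEM is 0
    print(M @ M)                  # every row is [1, 0, 0]: converged in one step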
H: Terminologies related to "compact?"
A set can be either open or closed, and there can either be a finite or infinite number of them.
A "compact" set is one where every open cover has finite subcover.
Is there such a thing as a set that is covered by an infinite cover of open subsets, and what would it be called?
How about a set where the open cover has a finite cover of closed subsets?
AI: Here is something which you may be interested in:
open (closed) cover: A cover $\mathcal U$ of $X$ is called an open cover (or a closed cover) if each member of $\mathcal U$ is open (closed) in $X$.
Note that closed cover is not often appeared in the general topology. We always consider open covers of $X$.
Lindelöf: A regular space $X$ is a Lindelöf space if and only if every open cover of $X$ has a countable subcover.
countably compact: A topological space $X$ is called a countably compact space if $X$ is a Hausdorff space and every countable open cover of $X$ has a finite subcover.
H: Why any field is a principal ideal domain?
Why any field is a principal ideal domain?
According to the definition of a P.I.D.: first, every ideal of the ring can be generated by a single element; second, the ring has no zero-divisors. These two conditions make a ring a P.I.D.
But how does one prove that any field is a P.I.D.?
AI: Let $F$ be a field and $I \subset F$ be a nontrivial ideal. Then if $a \in I$ is nonzero, we have that $1 = a^{-1} \cdot a \in I$, where $a^{-1}$ exists since $F$ is a field and $a \neq 0$. Since $1 \in I$, for every element $b \in F$, $b = b \cdot 1 \in I$, so we have that $I = F = \langle 1 \rangle$ if $I \neq \{0\}$.
In conclusion, the only ideals of a field $F$ are $\langle 0 \rangle = \{0\}$ and $\langle 1 \rangle = F$, which are both principal ideals. |
H: Computing limit of $(1+1/n)^{n^2}$
How can I compute the limit $\lim_{n\rightarrow\infty}\left(1+\frac{1}{n}\right)^{n^2}$? Of course $\lim_{n\rightarrow\infty}\left(1+\frac{1}{n}\right)^{n} = e$, and then $\left(1+\frac{1}{n}\right)^{n^2} = \left(\left(1+\frac{1}{n}\right)^n\right)^n$. Since the term inside converges to $e$, the whole thing is $e^n$, and can I conclude that the limit is infinity as $n\rightarrow\infty$?
AI: There is a slight problem when you say "the whole thing is $e^n$": what do you mean? Surely you know you cannot choose when to let the $n$s go to infinity; you have to make them large "at the same pace".
and $$\log\left(1+\frac 1n\right)=\frac 1n -\frac{1}{2n^2}+O\left(\frac 1{n^3}\right)$$
Alternatively, since $$\left(1+\frac 1 n\right)^n\to e $$
there exists $N$ such that whenever $n\geq N$, we have $$\left(1+\frac 1 n\right)^n\geq \frac{e}2\text{ (Why?) }$$
We have $\dfrac e2>1$ since $e>2$, and then $$x_n\geq r^n$$ whenever $n>N$, with $r=\dfrac e2>1$; hence $x_n\to\infty$.
H: distributing z different objects among k people almost evenly
We have $z$ objects (all different), and we want to distribute them among $k$ people ($k \le z$) so that the distribution is almost even.
i.e. the difference between the number of articles given to the person with maximum articles, and the one with minimum articles is at most 1.
We need to find the total number of ways in which this can be done
for example if there are 5 objects and 3 people the number of such ways should be 90.
I am not sure how we get this value.
AI: Let $k_1$ be the number of people that receive $n_1 = \lfloor \frac{z}{k}\rfloor$ objects, and $k_2$ the number that receive $n_2 = n_1 + 1$ objects. Clearly, $k_2 = z \bmod k$, $k_1+k_2=k$, and $k_1 n_1 + k_2 n_2 = z$.
Assuming the objects are distinguishable, the total number of ways is
$$ {k \choose k_1} \frac{z!}{(n_1!)^{k_1} (n_2!)^{k_2}} $$ |
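A short sketch of this formula in Python, checked against the stated example (the function name is mine):

    from math import comb, factorial

    def almost_even(z, k):
        n1, k2 = divmod(z, k)        # k2 people receive n1 + 1 objects
        k1 = k - k2
        return comb(k, k1) * factorial(z) // (
            factorial(n1)**k1 * factorial(n1 + 1)**k2)

    print(almost_even(5, 3))         # 90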
H: Proving every open subset $H$ of the plane $\mathbb{R^2}$ is the union of open discs
I am going through the proof of
Every open subset $H$ of the plane $\mathbb{R^2}$ is the union of open discs
in a book (open disc is the standard open Euclidean disc or open ball). It goes like this:
Since $H$ is open, for each point $p \in H$ there is an open disc $E_{p}$ such that $p \in E_{p} \subset H$.(I agree with this statement).
Now it says $H = \bigcup\{E_{p} : p \in H\}$. This I did not understand. Can you help me?
AI: Proof: First let us prove that $H \subseteq \bigcup \{E_p: p\in H\}$: for any $p\in H$, there exists $E_p$ such that $p \in E_p \subseteq \bigcup \{E_p: p\in H\}$.
Then we prove $\bigcup \{E_p: p\in H\}\subseteq H$. It is not difficult to see that this is true, since each $E_p\subseteq H$. This completes the proof.
Another idea:
Notice that all the open discs form a base of $\Bbb R^2$. Then, by the definition of a base, we can conclude that every open set is a union of open discs.
H: Find the value of each number which derives an average given a starting point, end point and total of numbers used.
Given an average, a low number, a high number and the total amount of numbers used (stated) to derive the average, is there a formula to determine what the value of each number was, that amounted to the average?
Example:
Average amount: $272
Highest amount: $548
Lowest amount: $22
Total numbers: 21 (e.g 21 people offering a range of bids on an item)
AI: No, there is not. You have four equations in $21$ unknowns. You are given the lowest and highest, so really have two equations in $19$ unknowns. The fact that they are all within the range is not enough. In special cases, like when you know all the bids are exact numbers of dollars and all but the last are $1$ more than the previous, you can solve it. |
H: Question for mathematicians who started before the computer era: what constants did you have memorized, in what form, and why?
A former department chair at BYU, Wayne Barrett, would always amaze grad students with his vast knowledge of mathematical constants, like the radical form of $\cos(2\pi/5)$. I've never memorized anything beyond some digits of $e$, $\pi$, and $\sqrt2$, but I know many famous mathematicians could pull out constants and perform calculations by hand extremely well.
For those of you who began working before computers became big, what kind of things did you memorize?
AI: I would suggest calculators are more important to this than computers. It depends a lot on what you want to calculate. I didn't sit down to memorize constants, but like to do mental calculation, much of it involving naturals or estimates, so the following is my list. In the spirit of the question I will use $=$ where $\approx$ is more accurate:
$(1+x)^n = 1+nx$ for $nx \ll1$ Probably the most important, because you can correct other things with it.
All perfect powers up to $1000$
Powers of $2$ up to $2^{16}=65536$
Primes to $100$
$\log_{10} 2 = 0.30103$ or the similar $2^{10} = 10^3$
$\ln 10 = 2.3, \ln 2 = 0.69$
$1$ radian $=57.3$ or $60$ degrees (this so you can use the trig functions of the next line)
trig functions for $30,45,60$ degrees (not $15,75$, they don't come up for me)
$\sin x=x, \cos x=1-\frac {x^2}2, \tan x = x$
square roots of $2,3,10$ to three places
decimals for $\frac 1n$ for $n \in [2,12]$
triangle and Fibonacci numbers up to $100$ or so
Finally, $\pi = \sqrt {10}= 3$
Somebody recently was surprised that we don't all know ${49 \choose 6}$ because of lottery problems, but I don't.
I think this has little to do with mathematical ability. |
H: Basic discrete math question regarding translation of logic ↔ English
I just started Discrete Mathematics, and am having a little bit of trouble in understanding the conversions of English ↔ logic.
$p$: "you get an A on the final exam."
$q$: "you do every exercise in the book."
$r$: "you get an A in the class."
Problem 1
Convert to logic using $p$, $q$, and $r$:
You get an A on the final, but you don't do every exercise in the book. Nevertheless, you get an A in the class.
What's screwing me up is the word “nevertheless.” At first I wrote $p \land \lnot q \land r$, then I thought about it and changed it to $(p \land \lnot q) \to r$, and somehow that seems like a better answer, but I have no idea why.
What I find hard to understand is when a sentence requires a conditional ($\to$). Does the translation for the above sentences need a conditional? There was a question earlier in the homework that said to translate $p \lor q \lor r$ to English, but how do you translate a bunch of variables with no conditional into an English sentence that makes sense?
AI: You can take the word "nevertheless" as a clue indicating that what follows isn't implied by, and clearly doesn't depend on, the preceding knowledge. "Nevertheless" could easily be replaced by the word "but" and still convey the same meaning, and as you know, we translate "but" as meaning "and":
"You get an A on the final, but you don't do every exercise in the book. But, you get an A in the class." This says pretty much what the original statement says.
If there were an implication (conditional) involved in the sequence of statements, you'd see something like "...Therefore, ..." or "As a consequence, ...".
So the correct translation here would indeed be $$(p \land \lnot q) \land r,\quad\text{or simply}\quad p \land \lnot q \land r$$ |
H: Computing limit of $(1+1/n^2)^n$
I want to compute the limit $\left(1+\frac{1}{n^2}\right)^n$. One way to do this is to take logs. So $$x_n=\left(1+\frac{1}{n^2}\right)^n$$ Then $$\log x_n = n\log\left(1+\frac{1}{n^2}\right) = n\left(\frac{1}{n^2}-\frac{1}{2n^4}+O\left(\frac{1}{n^6}\right)\right)$$So $\log x_n$ converges to $0$, and $x_n$ converges to $1$.
Is there a simpler way to do this, maybe without Taylor log expansion?
AI: From the Taylor Series, $$1+\frac{1}{n^2}\leq \exp\left(\frac{1}{n^2}\right)$$
Thus, $$1\leq \left(1+\frac{1}{n^2}\right)^n \leq \exp\left(\frac{1}{n}\right)$$
so by the Squeeze Theorem, the result follows $\square$ |
H: Limit of recursive sequence $a_n=\frac{(x/a_{n-1})+a_{n-1}}{2}$
Let $x$ and $y$ be positive numbers. Let $a_0=y$, and let $$a_n=\frac{(x/a_{n-1})+a_{n-1}}{2}$$Prove that the sequence $\{a_n\}$ has limit $\sqrt{x}$.
I rearranged the equation to be $a_n-\sqrt{x}=\dfrac{(a_{n-1}-\sqrt{x})^2}{2a_{n-1}}$. I think I should try to bound $a_n-\sqrt{x}$ to be close to $0$, but I don't know how I should take care of the $2a_{n-1}$ term in the denominator.
AI: Note that for $n\geq 1$, we have $a_n<a_{n-1}$ iff $a_{n-1}>\sqrt{x}$ (simple manipulation from the given recurrence.)
Also, $a_1,a_2,a_3,\ldots >\sqrt{x}$. (Complete the square in the given recurrence.) So $a_1,a_2,a_3,\ldots$ is decreasing. Since the sequence is decreasing and bounded below, it must converge.
Suppose its limit is $L$. Then $L = \dfrac{x/L+L}{2}$, implying $L=\sqrt{x}$. |
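This recurrence is the classical Babylonian (Newton) iteration for square roots; a tiny sketch showing the fast convergence (the function name is mine):

    def babylonian_sqrt(x, y, steps=8):
        a = y                        # a_0 = y, any positive start
        for _ in range(steps):
            a = (x / a + a) / 2
        return a

    print(babylonian_sqrt(2.0, 5.0))     # 1.4142135623730951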
H: Is there a problem when defining exponential with negative base?
Well, this question may seem silly at first, but I'll make my point clear. Suppose $n \in \Bbb N$ and suppose $a \in \Bbb R$ is any number. Then the definition of $a^n$ is clear for any $a$ we choose. Indeed we define:
$$a^n = \prod_{k=1}^na$$
And even if $a$ is negative this has a meaning. Then we extend the definition for $n \in \Bbb Z$ and for $n \in \Bbb Q$. When we are to define to $n \in\Bbb R$ we define it as:
$$a^x=e^{x\ln a}$$
That's fine, but $\ln $ is a function defined on $\Bbb R^+$ so that if we try to compute $(-5)^\pi$ we'll get into trouble because this would be:
$$(-5)^{\pi}=e^{\pi \ln(-5)}$$
But $\ln (-5)$ is undefined. In that case, the function that $f : A \subset \Bbb R^2 \to \Bbb R$ given by $f(a,x) = a^x$ would be undefined if $a < 0$, so that $A = \Bbb R^+ \times \Bbb R$. What I thought was: we can extend this function when $a$ is negative and $x$ is rational. In that case we would set it to the old definition of exponentiation, since we would have a real raised to a rational.
So, for negative base and irrational exponent the exponential remains undefined?
Thanks very much in advance!
AI: It depends what number system you are working in. In the reals, there is no answer. In the complex numbers, there are many answers. You can take the principal branch of the logarithm function, just saying that the imaginary part of the result will be in the range $(-\pi,\pi]$, in which case it is well defined. But if $e^z=w$, it is also true that $e^{z+2i\pi k}=w$ for $k \in \Bbb Z$ so you could say $\log w=z + 2i \pi k$ |
H: Interpolation between iterated logarithms
I am investigating the family of functions $$\log_{(n)}(x):=\log\circ \cdots \circ \log(x)$$
Is there a known smooth interpolation function $H(\alpha, x)$ such that $H(n,x)=\log_{(n)}(x)$ for $n\in\mathbb{N}$?
AI: Your function $H(\alpha,x)$ is the same as $\exp^{-\alpha}(x)$ in the notation of wikipedia, and as such it has been widely studied. Two good sources are the Citizendium page and also this great paper by Dmitri Kouznetsov, whose results I have managed to reproduce.
D. Kouznetsov (July 2009). "Solution of $F(z+1)=\exp(F(z))$ in complex z-plane". Mathematics of Computation 78 (267): 1647–1670.
The function he studies is $F(-\alpha)=H(\alpha,1)$ in your notation. Moreover, $H(\alpha,x)$ for different $x$ are just shifted versions of each other, in the sense that $H(\alpha,y)=H(\beta+\alpha,1)$ for some $\beta$ dependent on $y$. This is because if there is some $\beta$ such that $H(0,y)=H(\beta,1)$, then if I define $H'(\alpha)=H(\beta+\alpha,1)$, we have $H'(0)=H(0,y)$ and $H'(\alpha+1)=\log H'(\alpha)$, so $H'$ satisfies the same functional equations as $H$ and thus is the same function (assuming the additional translation-invariant regularity properties, which means analyticity and an asymptotic form as $\alpha\to i\infty$, see Kouznetsov's paper for details).
Over the reals, the fact that $H(0,y)=H(\beta,1)$ has a solution for any $y$ follows from the fact that the range of $H(\beta,1)$ for $\beta\in(-2,\infty)$ is all of ${\Bbb R}$, since $\lim_{\beta\to-2}H(\beta,1)=-\infty$ and $\lim_{\beta\to\infty}H(\beta,1)=\infty$. Over the complexes, it follows from Picard's little theorem, since $H$ is non-constant. |
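The integer case $H(n,x)=\log_{(n)}(x)$ is straightforward to compute; it is only the interpolation to non-integer $\alpha$ that requires Kouznetsov's machinery. A minimal sketch of the integer case (the function name is mine):

```python
import math

def iterated_log(n, x):
    """log_(n)(x): compose log n times, while the value stays positive."""
    for _ in range(n):
        if x <= 0:
            raise ValueError("iterated log left the positive reals")
        x = math.log(x)
    return x

print(iterated_log(1, 100.0))  # ≈ 4.6052
print(iterated_log(2, 100.0))  # ≈ 1.5272
print(iterated_log(3, 100.0))  # ≈ 0.4234
```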
H: This is correct? it is my project
Definite integral: the area over $[a,b]$, where $a=-2.5$ and $b=2.5$, approximated by rectangles. The region is the arch of a bridge (a scale model).
$$ y=-\frac{4}{25}x^2+1 $$
$$ \int_{-2.5}^{2.5}(-\frac{4}{25}x^2+1)dx$$
$$-\int_{-2.5}^{2.5}\frac{4}{25}x^2dx+\int_{-2.5}^{2.5}dx$$
$$-\frac{4}{25}\frac{x^{2+1}}{2+1}+x$$
$$-\frac{4}{25}\frac{x^{3}}{3}+x$$
$$-\frac{4}{75}x^{3}|_{-2.5}^{2.5}+x|_{-2.5}^{2.5}$$
$$[-\frac{4}{75}(2.5)^{3}+(2.5)]-[-\frac{4}{75}(-2.5)^{3}+(-2.5)]$$
$$[-\frac{4}{75}(15.625)+2.5]-[-\frac{4}{75}(-15.625)-2.5]$$
$$[-\frac{62.5}{75}+2.5]-[\frac{62.5}{75}-2.5]$$
$$[-\frac{62.5}{75}+\frac{187.5}{75}]-[\frac{62.5}{75}-\frac{187.5}{75}]$$
$$[\frac{125}{75}]-[-\frac{125}{75}]$$
$$\frac{125}{75}+\frac{125}{75}$$
$$\frac{250}{75}u^2\approx 3.3333 u^2$$
scale: $$1u=\frac{110}{9}cm$$
$$1u^2=\frac{12100}{81}cm^2$$
real scale: $$1cm^2=4225m^2$$
I'm doing this for a school project; I need to show the use of integrals in real life. So I made a scale model of a bridge, and I have the quadratic function $-\frac{4}{25}x^2+1$. I just need to know whether my scale and my maths are right so I can continue with the problem.
If you can answer in Spanish, even better, haha.
AI: This is Evariste's answer, translated here into English (it was originally rendered in Spanish for the asker).

The integral you computed is correct, but it would read better if you included the limits of integration before evaluating, i.e., did not leave it in indefinite form like this: $$-\frac{4}{25}\frac{x^{2+1}}{2+1}+x$$

If you are unsure whether everything you did is right, you can take advantage of WolframAlpha. It is a very useful website for verifying your computations.

From the point of view of strategy, since you had already split the integral into a sum of two integrals (which are easier to evaluate), why did you combine them again when you reached the evaluation stage? I suggest you practice computing integrals and adopt a systematic way of doing so; that will save you a great deal of time.

The final figure should be the area times the two conversion factors: $\frac{10}{3}\,u^2\cdot\frac{12100}{81}\,\frac{cm^2}{u^2}\cdot 4225\,\frac{m^2}{cm^2}=\frac{511225000}{243}\,m^2\approx 2103806.58\,m^2$.

Regards.
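As a quick independent check of the area in model units, here is a sketch using a midpoint Riemann sum, which also matches the "by rectangles" idea in the question:

```python
# Midpoint Riemann sum for the arch, as a check on the hand computation
f = lambda x: -4/25 * x**2 + 1
a, b, n = -2.5, 2.5, 100_000
dx = (b - a) / n
area = sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx
print(area)  # ≈ 3.3333 = 10/3 square units, as computed above
```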
H: Gaussian linking coefficient definition
The Wikipedia page on the linking number says that the linking number of two curves $\gamma_1$ and $\gamma_2$ in space can be computed by the integral
$$\,\frac{1}{4\pi}
\oint_{\gamma_1}\oint_{\gamma_2}
\frac{\mathbf{r}_1 - \mathbf{r}_2}{|\mathbf{r}_1 - \mathbf{r}_2|^3}
\cdot (d\mathbf{r}_1 \times d\mathbf{r}_2)$$
It says the integrand is the Jacobian of the Gaussian map
$$\Gamma(s,t) = \frac{\gamma_1(s) - \gamma_2(t)}{|\gamma_1(s) - \gamma_2(t)|}$$
Following Ted's comment I am able to show that
$$\oint_{\gamma_1}\oint_{\gamma_2}
\left\|\frac{\partial\Gamma}{\partial s}\times \frac{\partial\Gamma}{\partial t}\right\|\,dsdt =
\oint_{\gamma_1}\oint_{\gamma_2}
\frac{1}{|\mathbf{r}_1 - \mathbf{r}_2|^3}
\left|
(\mathbf{r}_1 - \mathbf{r}_2)\cdot \left (\frac{d\mathbf{r}_1}{ds} \times \frac{d\mathbf{r}_2}{dt}\right)\right| ds dt$$
AI: This is (up to sign) the integral of the pullback of the area $2$-form on the sphere by $\Gamma$. We get the same thing by pulling back the $2$-form
$$\omega = \frac{x\,dy\wedge dz + y\,dz\wedge dx + z\,dx\wedge dy}{\|\mathbf x\|^3}$$ by the unnormalized chord map $\tilde\Gamma(s,t)=\gamma_1(s)-\gamma_2(t)$.
Note that when we evaluate the pullback of the numerator, we get the determinant of the matrix with columns $\mathbf r_1-\mathbf r_2$, $\mathbf r_1'$, $-\mathbf r_2'$, which is $-(\mathbf r_1-\mathbf r_2)\cdot(\mathbf r_1'\times\mathbf r_2')$.
Indeed, I believe Wikipedia has the sign wrong — Gauss's original paper writes what is equivalent to
$$\frac1{4\pi}\oint_{C_1}\oint_{C_2}\frac{\mathbf r_2-\mathbf r_1}{\|\mathbf r_2-\mathbf r_1\|^3}\cdot (d\mathbf r_1\times d\mathbf r_2)\,.$$
As often happens, the formula in Wikipedia appears in various sources. I suspect that many people, when they check the signs in their heads with a typical $+1$ link, forget the negative sign when they compute $\partial\Gamma/\partial t$.
More classically, to compute the Jacobian of the map, you need to calculate $$\left\|\frac{\partial\Gamma}{\partial s}\times \frac{\partial\Gamma}{\partial t}\right\|\,.$$
More precisely, we want the signed area of the map $\Gamma$ (keeping track of how the chord map covers the sphere, with signed orientation), so we erase the absolute value signs you have at the end of your calculation. (For example, given two complicated curves that are not linked, the chord map covers portions of the sphere—perhaps all of it—numerous times, but then hits the same regions with opposite orientation.) |
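As a sanity check on the sign conventions, one can evaluate the double integral numerically for two explicitly linked circles (a Hopf link). This is a sketch assuming NumPy; the circles, grid size, and orientations are my own choices, and flipping either orientation flips the sign of the result.

```python
import numpy as np

N = 400
s = np.linspace(0, 2 * np.pi, N, endpoint=False)
ds = 2 * np.pi / N

# Two unit circles forming a Hopf link: one in the xy-plane at the origin,
# one in the xz-plane centered at (1, 0, 0); their minimum distance is 1
r1  = np.stack([np.cos(s), np.sin(s), np.zeros(N)], axis=1)
dr1 = np.stack([-np.sin(s), np.cos(s), np.zeros(N)], axis=1)   # r1'(s)
r2  = np.stack([1 + np.cos(s), np.zeros(N), np.sin(s)], axis=1)
dr2 = np.stack([-np.sin(s), np.zeros(N), np.cos(s)], axis=1)   # r2'(t)

diff  = r2[None, :, :] - r1[:, None, :]                # r2(t) - r1(s), Gauss's convention
cross = np.cross(dr1[:, None, :], dr2[None, :, :])     # r1'(s) x r2'(t)
integrand = np.einsum('stk,stk->st', diff, cross) / np.linalg.norm(diff, axis=2) ** 3
print(integrand.sum() * ds * ds / (4 * np.pi))         # ≈ +1 or -1, by orientation
```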
H: Limit $\lim\limits_{n\to\infty}\left(1+\frac{1}{n}+a_n\right)^n=e$ if $\lim\limits_{n\to\infty}na_n=0$
Let $\{a_n\}$ be any sequence of real numbers such that $\lim_{n\rightarrow\infty}na_n=0$. Prove that $$\lim_{n\rightarrow\infty}\left(1+\frac{1}{n}+a_n\right)^n=e$$
I thought about using binomial theorem. So $\left(1+\dfrac{1}{n}+a_n\right)^n = \left(1+\dfrac{1}{n}\right)^n + na_n\left(1+\dfrac{1}{n}\right)^{n-1} + \dbinom{n}{2}a_n^2\left(1+\dfrac{1}{n}\right)^{n-2}+\ldots$.
The first term has limit $e$, the second term has limit $0$ (because $na_n$ has limit $0$, and $\left(1+\dfrac{1}{n}\right)^{n-1} = \left(1+\dfrac{1}{n}\right)^{n}\left(1+\dfrac{1}{n}\right)^{-1}$ has limit $e$.) But for the other terms, it seems hard to find the limit.
AI: $$\lim_{n\rightarrow\infty}\left(1+\frac{1}{n}+a_n\right)^n$$
$$= \left(\lim_{n\rightarrow\infty}\left(1+\frac{1+n\cdot a_n}n\right)^{\frac n{1+n\cdot a_n}}\right)^{\lim_{n\rightarrow\infty}(1+n\cdot a_n)}$$
$$= \left(\lim_{y\rightarrow\infty}\left(1+\frac1y\right)^y\right)^{\lim_{n\rightarrow\infty}(1+n\cdot a_n)}$$
$$=e^1$$
since $\lim_{n\rightarrow\infty} n\cdot a_n=0$, it follows that $y=\frac n{1+n\cdot a_n}\to\infty$ as $n\rightarrow\infty$, and the exponent satisfies $1+n\cdot a_n\to1$.
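A quick numerical sanity check, a sketch with the particular choice $a_n = 1/n^2$ (so that $n a_n \to 0$):

```python
# With a_n = 1/n**2 we have n*a_n -> 0, and the expression should approach e
for n in (10, 1_000, 100_000):
    print(n, (1 + 1/n + 1/n**2) ** n)
# e = 2.718281828459045...
```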
H: limit $a_{n+1}/a_n$ for recurrence $a_{n+2}=a_{n+1}+a_n$
Let $\{a_n\}$ be a positive sequence which satisfies $a_{n+2}=a_{n+1}+a_n$ for $n=1,2,\ldots$. Let $z_n=a_{n+1}/a_n$. How can I prove that $\lim_{n\rightarrow\infty}z_n$ exists?
I looked at $z_{n+1}=1+1/z_n$, but I still don't know how to go from here.
AI: Corrected version.
By hypothesis the $z_n$ are positive, so the recurrence
$$z_{n+1}=1+\frac1{z_n}$$
tells you that $z_n>1$ for $n\ge 2$. It follows that $1<z_n<2$ for $n\ge3$. If the sequence converges, it must converge to some $L\in[1,2]$. Moreover, that $L$ must be a fixed point of the function
$$f(x)=1+\frac1x\;:$$
$f$ is continuous, so it must satisfy
$$L=\lim_{n\to\infty}z_{n+1}=\lim_{n\to\infty}f(z_n)=f(L)\;.$$
Solving
$$L=1+\frac1L$$
for the unique solution in the interval $[1,2]$, we see that the only possible limit is
$$\varphi=\frac{1+\sqrt5}2\approx1.618\;.$$
Note that if $z>\varphi$, then $$f(z)=1+\frac1z<1+\frac1\varphi=\varphi\;,$$ while if $z<\varphi$, then $$f(z)=1+\frac1z>1+\frac1\varphi=\varphi\;.$$ That is, the sequence is alternately above and below $\varphi$.
Suppose that $1<a\le x<y$. The mean value theorem says that there is a $u\in[x,y]$ such that
$$\left|\frac{f(x)-f(y)}{x-y}\right|=|f\,'(u)|=\frac1{u^2}\le\frac1{a^2}\;.\tag{1}$$
You can easily check that if $1<z<2$, then $\frac32<f(z)<2$, so $z_n>\frac32$ for $n\ge 4$. We may therefore take $a=\frac32$ in $(1)$ to see that for each $n\ge 4$,
$$\left|\frac{z_{n+2}-z_{n+1}}{z_{n+1}-z_n}\right|\le\frac1{(3/2)^2}=\frac49$$
and hence
$$|z_{n+2}-z_{n+1}|\le\frac49|z_{n+1}-z_n|\;.$$
To finish, just combine this with the fact that the terms are alternately larger and smaller than $\varphi$ to see that if $z_n<\varphi$, then
$$z_n<z_{n+2}<z_{n+4}<\ldots<\varphi<\ldots<z_{n+5}<z_{n+3}<z_{n+1}$$
with $\lim\limits_{k\to\infty}(z_{n+2k+1}-z_{n+2k})=0$: this shows that the sequence converges to $\varphi$. |
H: The identifications of $R$ in its ring of fractions $S^{-1}R$
If $R$ is an integral domain, we can identify an element $r\in R$ with the element $rs/s$ of the ring of fractions $S^{-1}R$. In this way, we can identify $r\in R$ with $r/1_R$. I've seen somewhere that we can make a more general identification of $r$ with $r/u$, where $u$ is a unit; is that true?
Thanks in advance
AI: The relevant result: $\tau: R \to S^{-1}R \ , \ r\mapsto r/1$ is injective if $S$ doesn't contain zero-divisors of $R.$
Proof: Suppose $r/1 = r'/1.$ By definition, this is true if and only if there exists $s\in S$ such that $s(r\cdot 1 - r'\cdot 1)= s( r-r')=0.$ But $S$ has no zero divisors, so we must have $r=r'.$
You can modify this proof to show that the map $\sigma: R \to S^{-1}R \ , \ r \mapsto r/u$ (where $u$ is a unit of $R$ contained in $S$) is injective if $S$ doesn't contain zero-divisors of $R.$ |
H: Proper maps from compact manifolds
I'm stuck on this statement. Could anyone please help me out?
Let $X$ be a compact manifold. Then every map $f: X \longrightarrow Y$ is proper.
The definition of proper: a smooth map between manifolds is called proper if inverse images of compact subsets are compact.
I know that continuous maps map compact sets to compact sets. But this seems to be the converse of that... Is there anything that I'm missing here? Thanks!
AI: In $Y$, compact sets are closed (assuming $Y$ is Hausdorff). $f$ is continuous, so the inverse image of a closed set is closed. But a closed subset of a compact (Hausdorff) space is compact. So the inverse image of a compact set is compact. |
H: Fair and Unfair coin Probability
I am stuck on this question.
A coin with $P(H) = \frac{1}{2}$ is flipped $4$ times and then a coin with $P(H) = \frac{2}{3}$ is tossed twice. What is the probability that a total of $5$ heads occurs?
I keep getting $\frac{1}{6}$ but the answer is $\frac{5}{36}$.
Attempt: $P($all heads on the four coins$)P($either one of the tosses is heads on the two coins$)+P(3$ heads on the four coins$)P($both coins are heads$)$
$P($all heads on the four coins$) = \left(\frac{1}{2}\right)^4 = \frac{1}{16}$.
$P($either one of the tosses is heads on the two coins$) = 1-P($no heads on both tosses$) = 1-\frac{1}{3}\cdot\frac{1}{3} = \frac{8}{9}$.
$P($exactly $3$ heads on the four tosses$) = \frac{1}{4}$.
$P($both coins are heads$) = \frac{2}{3}\cdot\frac{2}{3} = \frac{4}{9}$.
Final Equation: $\frac{1}{16}\cdot\frac{8}{9}+\frac{1}{4}\cdot\frac{4}{9} = \frac{1}{6}$.
Why am I off by $\frac{1}{36}$?
AI: The required probability will be
$P($exactly $4 $ heads from the $4$ flips of $1$st coin$)\cdot P($exactly $1 $ head from the $2$ flips of $2$nd coin $)+$
$P($exactly $3 $ heads from the $4$ flips of $1$st coin$)\cdot P($exactly $2 $ heads from the $2$ flips of $2$nd coin $)$
Using Binomial Distribution, the required probability
$$\binom44\left(\frac12\right)^4\left(1-\frac12\right)^{4-4} \cdot\binom21\left(\frac23\right)^1\left(1-\frac23\right)^{2-1}$$
$$+\binom43\left(\frac12\right)^3\left(1-\frac12\right)^{4-3} \cdot\binom22\left(\frac23\right)^2\left(1-\frac23\right)^{2-2}$$
$$=\frac1{16}\cdot\frac49+\frac14\cdot\frac49=\frac5{36}$$

This also locates your missing $\frac1{36}$: in your first term you used $P(\text{at least one head on the two biased flips})=\frac89$, where you need $P(\text{exactly one head})=2\cdot\frac23\cdot\frac13=\frac49$. The difference is $\frac1{16}\cdot\left(\frac89-\frac49\right)=\frac1{36}$.
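An exact brute-force check over all $2^6$ outcomes confirms the value; a sketch using exact rational arithmetic:

```python
from itertools import product
from fractions import Fraction

total = Fraction(0)
for flips in product((0, 1), repeat=6):      # 1 = heads; first 4 fair, last 2 biased
    p = Fraction(1)
    for h in flips[:4]:
        p *= Fraction(1, 2)                  # fair coin: 1/2 either way
    for h in flips[4:]:
        p *= Fraction(2, 3) if h else Fraction(1, 3)
    if sum(flips) == 5:
        total += p
print(total)   # 5/36
```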
H: computing recursive functions
I have a function $\alpha : \mathbb{N}\times\mathbb{N} \rightarrow\mathbb{N}$, defined recursively, as below:
$\forall n \in\mathbb{N}, \alpha(n,10) := \begin{cases} \alpha(n-1-9, 10) + 9 &\text{if}\ n \ge 10,\\ 0 &\text{if}\ n \lt 10 \end{cases}$.
When I compute $\alpha(10, 10)$ my answer is $\alpha(0,10)+9$. When I compute $\alpha(100, 10)$ I get $\alpha(90,10)+9$. When I compute $\alpha(1000, 10)$, I get $\alpha(990,10)+9$.
Are my answers correct, or is there something I'm not doing or a way I'm supposed to phrase it? Additionally, I'm asked "What does function alpha calculate?" to which I answered
$\forall n \in\mathbb{N}$ such that $n \ge 10, a_n = \alpha(n-10,10)+9$
Am I in any way correct? I apologize if this seems like I'm just asking for answers, but I don't really have any resources to check if my answers are correct, and I'm struggling to keep up with a class in which I'm expected to learn discrete mathematics in six weeks, with four $2$-hour lectures and one exam per week. Thank you for any help in advance.
AI: First, since $10$ is the only value of the second argument that ever appears, and since $n-1-9=n-10$, you might as well get the clutter out of the way and look instead at the function
$$\beta:\Bbb N\to\Bbb N:n\mapsto\begin{cases}
\beta(n-10)+9,&\text{if }n\ge 10\\
0,&\text{if }n<10\;:
\end{cases}$$
$\alpha(n,10)=\beta(n)$ for all $n\in\Bbb N$. Thus, for instance, the first value that you want is
$$\alpha(10,10)=\beta(10)=\beta(0)+9=9\;.$$
You stopped one step short of actually evaluating $\alpha(10,10)$ when you said that it's $\alpha(0,10)+9$. This is correct as far as it goes, but it doesn't go far enough: you're supposed to come up with an actual numerical value.
Suppose that $10\le n\le 19$; then $0\le n-10\le 9$, so $\beta(n-10)=0$, and
$$\beta(n)=\beta(n-10)+9=0+9=9\;.$$
In other words, we now know that
$$\beta(n)=\begin{cases}
0,&\text{if }0\le n\le 9\\
9,&\text{if }10\le n\le 19\;.
\end{cases}$$
What about the next block of $10$? If $20\le n\le 29$, then $10\le n-10\le 19$, so
$$\beta(n)=\beta(n-10)+9=9+9=18\;.$$
Similarly, if $30\le n\le 39$, then $20\le n-10\le 29$, and
$$\beta(n)=\beta(n-10)+9=18+9=27\;.$$
We can now extend our partial description of $\beta$:
$$\beta(n)=\begin{cases}
0,&\text{if }0\le n\le 9\\
9,&\text{if }10\le n\le 19\\
18,&\text{if }20\le n\le 29\\
27,&\text{if }30\le n\le 39\;.
\end{cases}$$
It appears that in each new block of $10$ consecutive inputs the output goes up by $9$. We might conjecture that
$$\beta(n)=9\left\lfloor\frac{n}{10}\right\rfloor$$
and try to prove it by induction on $n$. This turns out to be very easy: the induction step is just like the steps above in which I went from the tens block to the twenties and from the twenties block to the thirties. |