H: Continuity plus agreement on a dense subset implies agreement on the whole interval
I would like some help proving that if $f$ and $g$ are continuous on $(a,b)$ and $f(x)=g(x)$ for every $x$ in a dense subset of $(a,b)$, then $f(x)=g(x)$ for all $x$ in $(a,b)$.
AI: Let $S\subset(a,b)$ be the dense subset in the question. Take any $x\in(a,b)$, and find a sequence $\{a_n\}$ in $S$ such that $a_n\to x$. Such a sequence exists precisely because $S$ is dense in $(a,b)$.
Now, we know that $f(a_n)=g(a_n)$ for all $n$ because they agree on $S$. We also know that $f$ and $g$ are continuous on $(a,b)$. By continuity, we have:
$$f(x)=\lim_{n\to\infty}f(a_n)=\lim_{n\to\infty}g(a_n)=g(x)$$
H: $\epsilon$-$\delta$ proof
I found an interesting problem in my textbook, it asks to prove the following statement:
If $f'(x_{0}) >0$, then there is a $\delta >0$ such that $f(x)<f(x_{0})$ if $x_{0}- \delta < x <x_{0}$, and $f(x) > f(x_{0})$ if $x_{0} <x < x_{0} + \delta$.
How would one prove this from the definitions?
AI: Since $f'(x_0) > 0$, we can find $\delta > 0$ so that:
$$
0 < |x - x_0| < \delta \implies \frac{f(x) - f(x_0)}{x - x_0} > 0
$$
(why?)
Now, the quotient above is positive iff the numerator and denominator have the same sign. Split this into two cases and handle each case separately to get the desired result.
H: Coin tossed 2500 times: there is about a 68% chance that the percent of heads is in the range 50% plus or minus how much?
A coin is tossed $2500$ times. There is about a $68\%$ chance that the percent of heads is in the range $50\%$ plus or minus ($0.5$, or $1$, or $1.5$, or $2$, or $2.5$)?
$\text{P(of coin tossed is 1/2)}=0.50 \pm \text{SD}$
$n=2500$
$μ=E[X]=x_1∗p_1+x_2∗p_2+x_3∗p_3+...+ x_n∗p_n$
$μ=1*(1/2) + (0*(1/2))=1/2$
The standard deviation of a single toss is:
$\text{SD}(X)=\sqrt{E[(X−μ)^2]}$
$=\sqrt{(x_1−μ)^2∗p_1+(x_2−μ)^2∗p_2+(x_3−μ)^2∗p_3+\cdots+(x_n−μ)^2∗p_n}$
$= \sqrt{(1-1/2)^2*(1/2) + (0-1/2)^2*(1/2)}$
$= \sqrt{2/8}$
$\text{SD}(X) = 0.5$
$σ=\sqrt{(p*(1 - p))/n}$
$σ=\sqrt{(0.5*(1-0.5))/2500}= 0.01 = 1\%$
AI: Hint: If you let $x_1=1$ for heads and $x_2=0$ for tails and plug into the equations what do you get? What are the probabilities?
Added: Your added work is correct except that you have expressed $\sigma$ as a percentage without indicating it. The absolute standard deviation is $\sqrt {2500 \cdot 0.5 \cdot (1-0.5)}=25$. You have written $\sqrt {0.5 \cdot (1-0.5)/2500}$, which is $0.01=1\%$.
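A quick numeric check of both standard deviations (my own sketch, not from the thread):
import math
n, p = 2500, 0.5
print(math.sqrt(n * p * (1 - p)))   # absolute SD of the head count: 25.0
print(math.sqrt(p * (1 - p) / n))   # SD of the head proportion: 0.01, i.e. 1%
So the 68% (one-SD) range for the percent of heads is 50% plus or minus 1%.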
H: Simplify $\sqrt n+\frac {1}{\sqrt n}$ for $n=7+4\sqrt3$
If $n=7+4\sqrt3$,then what is the simplified value of
$$\sqrt n+\frac {1}{\sqrt n}$$
I was taking the LCM, but how do I get rid of $\sqrt n$ in the denominator?
AI: Hint: We have $\left(\sqrt n+\dfrac {1}{\sqrt n}\right)^2=n+2+\frac 1n$, then multiply top and bottom of the last by the conjugate.
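A numeric check of the hint (my own sketch): rationalizing gives $\frac 1n = 7-4\sqrt3$, so $n+2+\frac 1n = 16$ and the (positive) value is $4$.
import math
n = 7 + 4 * math.sqrt(3)
print(math.sqrt(n) + 1 / math.sqrt(n))   # 4.0000...
print(n + 2 + 1 / n)                     # 16.0000...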
H: Finding specific alternative form of $\frac{(x-y)x+{y\over(y-z)}}{(x+y)z}=ab$
How does one approach $$\frac{(x-y)x+{y\over(y-z)}}{(x+y)z}=ab$$ to find the form: $$-a b z (x+y) (y-z) = x^2 (-y)+x^2 z+x y^2-x y z-y$$
AI: $$\frac{(x-y)x+{y\over(y-z)}}{(x+y)z}=ab$$
take LCM on upper part
$$\frac{(x-y)(y-z)x+{y}}{(x+y)(y-z)z}=ab$$
$${(x-y)(y-z)x+{y}}=ab(x+y)(y-z)z$$
$$-({(x-y)(y-z)x+{y}})=-(abz(x+y)(y-z))$$
$$-({(xy-xz-y^2+yz)x+{y}})=-abz(x+y)(y-z)$$
$$-({x^2y-x^2z-xy^2+xyz+{y}})=-abz(x+y)(y-z)$$
$$-x^2y+x^2z+xy^2-xyz-y=-abz(x+y)(y-z)$$
$$x^2(-y)+x^2z+xy^2-xyz-y=-abz(x+y)(y-z)$$
H: Showing uniqueness of Riemann's Integral
I am given the definition: Let $f$ be defined on $[a,b]$. We say that $f$ is Riemann integrable on $[a,b]$ if there is a number $L$ with the following property: for every $\epsilon>0$, there is a $\delta > 0$ such that $\left\|P\right\|< \delta$ implies $| \sigma -L| < \epsilon$, where $\sigma$ is a Riemann sum of $f$ over the partition $P$ of $[a,b]$. In this case, we say that $L$ is the Riemann integral of $f$ over $[a,b]$, and write $\int_{a}^{b} f(x) dx=L$.
I am then asked to show why $L$ is a unique limit. It makes sense that it should be unique, but how to show it is well above me.
AI: Assume that $L_1$ and $L_2$ are the Riemann integrals of $f$ over $[a,b]$. We want to show that $L_1=L_2$. Let $\epsilon >0$. Then for each $i=1,2$, there exists $\delta_i>0$ such that
$$\|P \|<\delta_i \quad \Rightarrow \quad |\sigma-L_i|<\frac{\epsilon}{2}$$
whenever $P$ is a partition of $[a,b].$ Take $\delta=\min \{\delta_1,\delta_2\}$. Fix a partition $P$ of $[a,b]$ and suppose $\|P\|<\delta$. Note that $\delta\le \delta_i$ for $i=1,2.$ Hence
$$0\le|L_1-L_2|\le|\sigma-L_1|+|\sigma-L_2|<\epsilon.$$
Since $\epsilon>0$ was arbitrary,
$$0\le|L_1-L_2|<\epsilon$$
holds for all $\epsilon >0.$ This forces us to conclude that $|L_1-L_2|=0.$ Hence, $L_1=L_2.$
H: Knowing that $n= 3598057$ is a product of two different prime numbers and that 20779 is a square root of $1$ mod $n$, find the prime factorization of $n$.
Knowing that $n= 3598057$ is a product of two different prime numbers and that 20779 is a square root of $1$ mod $n$, find prime factorization of $n$.
What I have done so far:
$n = p \cdot q$
$x^2 \equiv 1\pmod{n}$
$x^2 -1 \equiv 0\pmod{n}$
$(x-1)(x+1) \equiv 0\pmod{n}$
$x-1 = 20779 \lor x + 1=20779$
I have also noticed that:
$(x-1)(x+1) \equiv 0\pmod{p \cdot q}$
$(x-1)(x+1) \equiv 0\pmod{p} \land (x-1)(x+1) \equiv 0\pmod{q}$
But I have no idea what to do next. Any hints?
AI: From $(x-1)(x+1)\equiv 0 \pmod n$ you cannot conclude that $x-1 = 20779 \lor x + 1=20779$ when $n$ is not prime. Since you know that $n$ has only two prime factors, you can conclude that $x-1 \equiv 0 \pmod p$ and $x+1 \equiv 0 \pmod q$ (or the other way around; we could swap $p,q$), unless $pq$ divides one of $x+1, x-1$. So factor $20778$ and $20780$, looking for factors that will multiply to make $3598057$. The factors are small enough to find by hand.
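The hint can also be packaged as a gcd computation (a sketch I am adding; the thread itself suggests factoring by hand):
from math import gcd
n, x = 3598057, 20779
p = gcd(x - 1, n)   # the prime factor of n dividing x - 1
q = gcd(x + 1, n)   # the prime factor of n dividing x + 1
assert p * q == n and 1 < p < n
print(p, q)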
H: When does a matrix $A$ with ones on and above the diagonal have $\det(A)=1$?
What conditions, if they're even necessary, must be placed on $\star$ so that the matrix
$$ \begin{pmatrix} 1 & & \huge{1} \\ & \ddots & \\ \huge{\star} & & 1 \end{pmatrix}, $$
satisfies $\det{(A)}=1$? Here $\huge{1}$ denotes "all entries above the diagonal are $1$'s," and $\huge{\star}$ denotes arbitrary scalar entries from a general field $F$. My intuition tells me that no matter what, it's $1$.
EDIT:
More specifically, disregard my call for conditions, but assume rather that the lower diagonal entries are $\color{red}{\text{guaranteed to be less than $1$}}$.
AI: Let $$A = \begin{pmatrix} 1 & & &\huge{1} \\ a_1 & \ddots \\ & \ddots & \ddots \\ \huge{\star}& & a_{n-1} & 1 \end{pmatrix}$$ be your matrix. I claim that $\det(A) = \prod_{i=1}^{n-1}(1-a_i)$.
To see why this is true, note that by expanding the determinant in the first column, we get $$\det(A) = \begin{vmatrix}1 & & &\huge{1} \\ a_2 & \ddots \\ & \ddots & \ddots \\ \huge{\star}& & a_{n-1} & 1\end{vmatrix} - a_1\begin{vmatrix}1 & & &\huge{1} \\ a_2 & \ddots \\ & \ddots & \ddots \\ \huge{\star}& & a_{n-1} & 1\end{vmatrix}+0+0+\cdots+0,$$
since the first two rows are equal except for their first entries.
Hence $\det(A) = (1-a_1)\begin{vmatrix}1 & & &\huge{1} \\ a_2 & \ddots \\ & \ddots & \ddots \\ \huge{\star}& & a_{n-1} & 1\end{vmatrix}$ and we get our result by induction.
Thus the restriction you are looking for is $\prod_{i=1}^{n-1} (1-a_i) = 1$, where $a_1,\ldots,a_{n-1}$ is the lower diagonal of your matrix.
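A numeric spot-check of the claim that the entries below the subdiagonal play no role (my own sketch, assuming numpy):
import numpy as np
rng = np.random.default_rng(0)
n = 6
a = rng.normal(size=n - 1)                   # subdiagonal entries a_1, ..., a_{n-1}
A = np.triu(np.ones((n, n)))                 # ones on and above the diagonal
A[np.arange(1, n), np.arange(n - 1)] = a     # place the subdiagonal
A[np.tril_indices(n, k=-2)] = rng.normal(size=(n - 1) * (n - 2) // 2)  # the stars
print(np.linalg.det(A), np.prod(1 - a))      # the two values agree up to rounding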
H: Expressing $\sqrt{n +m\sqrt{k}}$
Following this answer, is there a simple rule for determining when:
$$\sqrt{n +m\sqrt{k}}$$
Where $n,m,k \in \mathbb{N}$, can be expressed as:
$$a + b\sqrt{k}$$
For some natural $a,b$?
This boils down to asking for what $n,m,k \in \mathbb{N}$ there exist $a,b\in \mathbb{N}$ such that:
$$2ab= m,\ \ \text{and}\ \ a^2+b^2k = n$$
AI: Hint: Please see this link. To translate it to your problem, note that $ b = m^2k $. From here, one of the square roots on the right hand side of their expression must reduce.
In the case that $ a + \sqrt{a^2 - b} = 2p^2, p \in \mathbb{N} $, $ p^2 = \frac{a + \sqrt{a^2 - b}}{2} $. For the second case $ q^2 = \frac{a - \sqrt{a^2 - b}}{2} $. Hence, if $ \frac{a \pm \sqrt{a^2 - b}}{2} $ is a perfect square then the nested square root is reducible.
For example, let $ a = 30, b = 896 $. In this case, $ \frac{a + \sqrt{a^2 - b}}{2} = 16 $, so $ \sqrt{30 + \sqrt{896}} $ is reducible, being equal to $ 4 + \sqrt{14} $. Also, if $ a = 30, b = 756 $, it is reducible because $ \frac{a - \sqrt{a^2 - b}}{2} = 9 $, being equal to $ 3 + \sqrt{21} $.
H: Matrix $BA\neq I_{3}$
If $\text{A}$ is a $2\times3$ matrix and $\text{B}$ is a $3\times2$ matrix, prove that $\text{BA}=I_{3}$ is impossible.
So I've been thinking about this, and so far I'm thinking that a homogeneous system is going to be involved in this proof. Maybe something about one of the later steps being that the last row of the matrix would be $0\neq \text{a}$, where a is any real number. I've also been thinking that for a $2\times3$ matrix, there is a (non-zero) vector $[x,y,z]$ such that $\text{A}[x,y,z]=[0,0]$, because the dot product could possibly yield $0$. I'm not sure if that's helpful at all though.
Trouble is I'm not really too sure how to continue, or even begin.
Any help would be appreciated.
AI: Consider the possible dimension of the columnspace of the matrix $BA$. In particular, since $A$ has at most a two-dimensional columnspace, $BA$ has at most a two-dimensional columnspace. Stated more formally, if $A$ has rank $r_a$ and $B$ has rank $r_b$, then $BA$ has rank at most $\min\{ r_a, r_b \}$.
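An illustration of the rank bound (a sketch, not part of the original answer):
import numpy as np
rng = np.random.default_rng(1)
for _ in range(5):
    A = rng.normal(size=(2, 3))
    B = rng.normal(size=(3, 2))
    assert np.linalg.matrix_rank(B @ A) <= 2   # never 3, but I_3 has rank 3
print("rank(BA) <= 2 in every trial, so BA = I_3 is impossible")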
H: How is Brownian motion predictable?
Could someone please explain how Brownian motion is predictable? My understanding is that a predictable process is one that depends on information up to, but not including, time $t$; yet $W_t$ has to be observed at time $t$ to determine its value.
AI: Brownian motion is continuous in time so you can determine the value of $W_t$ as a limit from the left: $W_t=\lim_{s\rightarrow t^-}W_s$.
H: Dual basis and annihilator problem
I think they're fairly simple but I really don't know where to start, the problems are these:
First one:
$V$ a vector space of dimension $n$ and $\phi \in V^* \setminus \{0\}$.
Prove that $\dim \ker \phi = n-1$.
Second one:
Let $B = \begin{bmatrix}2 & -2 \\ -1 & 1\end{bmatrix}$ and $W = \{A \in \operatorname{Mat}_{\mathbb R}(2\times2) : AB = 0\}$. Suppose $f \in W^o$ (the annihilator of $W$) such that $f(\operatorname{Id}_{2\times2}) = 0$ and $f\left({\scriptstyle\begin{bmatrix}0 & 0 \\ 0 &1\end{bmatrix}}\right) = 3$. Calculate $f(B)$.
For this is one I don't really know when I have to use that $AB = 0$.
AI: For the first question, recall the Rank-Nullity Theorem: for any linear operator $T:V\to W$ between two finite dimensional vector spaces the following equality holds: $\dim\text{ker}\ T+\dim\text{Im}\ T=\dim V$. If $\phi\in V^*\setminus\{0\}$, then $\phi:V\to\mathbb{F}$, where $\mathbb{F}$ is the field over which you consider $V$. The dimension of $\mathbb{F}$ as a vector space over itself is 1. Now use the theorem.
Concerning the second question, notice that $W$ is the subspace of matrices of the form:
$$\begin{bmatrix}{}
a & 2a\\
c & 2c
\end{bmatrix}$$
where $a,c\in\mathbb{R}$. Thus $\dim W=2$. As $f(Id)=0$, $Id\not\in W$ and $f\neq0$, we have that $\ker\ f=W\oplus\text{span}\{Id\}$. Now we have:
$$f(B)=f\begin{bmatrix}{}
2 & -2\\
-1 & 1
\end{bmatrix}=f\begin{bmatrix}{}
-1 & -2\\
0 & 0
\end{bmatrix}+f\begin{bmatrix}{}
0 & 0\\
-1 & -2
\end{bmatrix}+f\begin{bmatrix}{}
3 & 0\\
0 & 3
\end{bmatrix}$$
The first and the second matrices from RHS are in W, the third one is the multiplication of $Id$. So $f(B)=...$ |
H: Find the angle between the 2 points (50.573,-210.265) and (117.833,-80.550)
I am attempting to find the angle between the 2 points (50.573,-210.265) and (117.833,-80.550).
Is my calculation correct because a program is giving me a different answer? It says the angle is 27'24'27.27 DMS
dx = x2 - x1;
dy = y2 - y1;
angle = Atan2(dy,dx) * 180 / PI;
angle = 62.59242
My result is 62.59242 degrees but the program says its 27'24'27.27 DMS. Which is correct?
Also if the points were 3d would that affect the angle? Or does the z axis not impact on the horizontal angle?
AI: I agree with your calculation. It appears they are defining it with the arguments switched. The fact that your answer is $90-$theirs supports this. Note that there is no angle between two points. What you have is the angle between the line through those points and the $x$ axis.
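Reproducing both numbers (my own sketch): atan2 measures from the $x$ axis, and the program's DMS value is the complement, measured from the $y$ axis.
import math
dx = 117.833 - 50.573
dy = -80.550 - (-210.265)
deg = math.degrees(math.atan2(dy, dx))
print(deg)                       # 62.5924...
other = 90 - deg                 # 27.4075...
d = int(other); m = int((other - d) * 60); s = (other - d - m / 60) * 3600
print(d, m, round(s, 2))         # 27 24 27.27, the DMS output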
H: inner product space and gram matrix
I have a question from my professor that I cannot figure out.
$V$ is an inner product space on $\mathbb R^2$.
Let $E=\{v_1,v_2\}$ be a basis.
This is its Gram matrix:
$$\begin{pmatrix}
2 & -1\\
-1 & 1\\
\end{pmatrix} =
\begin{pmatrix}
{(v_{1},v_{1})} & {(v_{1},v_{2})}\\
{(v_{2},v_{1})} & {(v_{2},v_{2})}\\
\end{pmatrix}$$
Let $T:V\to V$ be a linear map whose matrix with respect to the basis $E$ is
$$\begin{pmatrix}
1 & 2\\
2 & 1\\
\end{pmatrix}$$
Is $T$ self-adjoint?
Well, first of all, according to the Gram matrix it is very easy to see that $E$ is not an orthonormal basis, and therefore the matrix of $T^*$ is not necessarily the conjugate transpose of the matrix of $T$.
So where can I go from here? I cannot assume that the inner product is the standard one, so I do not know how to look for the adjoint, if it exists.
And if it does not exist, how can I prove that, given that I do not know the inner product and therefore cannot find the adjoint?
AI: Let us call $P$ the matrix of the basis $E$ expressed in an orthonormal basis, so that
$$
P^*P=\pmatrix{2&-1\\-1&1}=Q
$$
and, if $A$ denotes the matrix of $T$ in the same orthonormal basis,
$$
P^{-1}AP=\pmatrix{1&2\\2&1}=B.
$$
So the question amounts to: do we have $A^*=A$? Now, note that $B^*=B$ and
$$
A^*=A\;\iff\;(PBP^{-1})^*=PBP^{-1}\;\iff\; (P^*)^{-1}BP^*=PBP^{-1}\;\iff\;BQ=QB.
$$
I let you check whether the latter is true.
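Carrying out that final check numerically (my own sketch):
import numpy as np
Q = np.array([[2, -1], [-1, 1]])   # the Gram matrix
B = np.array([[1, 2], [2, 1]])     # the matrix of T in the basis E
print(B @ Q)                       # [[0 1], [3 -1]]
print(Q @ B)                       # [[0 3], [1 -1]]
# BQ != QB, so T is not self-adjoint with respect to this inner product.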
H: Inclusions between $H^1_0(\Omega) \cap H^2(\Omega)$ and $H^2_0(\Omega)$
Let $\Omega$ be a bounded domain in $\mathbb{R}^n$. I would like to understand something about the relationships between these two spaces: $$H^1_0(\Omega) \cap H^2(\Omega)\quad \text{and}\quad H^2_0(\Omega).$$
Is there any inclusion? If there is not, could you exhibit a counterexample?
Thank you!
AI: Using the definitions, you should be able to prove that $H^2_0(\Omega) \subset H_0^1(\Omega) \cap H^2(\Omega)$. The reverse inclusion does not hold; as a counterexample, try picking a really nice $\Omega$ (such as a disc) and writing down a well-behaved function that vanishes at the boundary, but whose first derivatives do not. Then such a function will be in $H_0^1(\Omega) \cap H^2(\Omega)$ but not $H_0^2(\Omega)$.
H: Critical points in multivariable calc
Find the critical points of
$z = x^{3} + 3xy^{2} - 3x^{2} - 3y^{2} + 7$
I understand it when written as $f(x,y)$, but this $z$ is really throwing me off.
I could take the partial derivatives with respect to $x$ and $y$, but if I take the partial of $z$ I get $0=0$?
EDIT after comment. Ok, so I will take the partial derivatives.
$Fx = 3x^{2} + 3y^{2} - 6x$
$Fy = 6xy - 6y$
Where would I go from here in obtaining the critical points? Solve for $x$ and $y$ where both equal $0$?
AI: Don't let the $z$ confuse you. It's in the same spirit as denoting $y=f(x)$ for some function of one variable. In single variable calculus, if we have some polynomial like
$$
y=f(x)= x^3 - 2x + 3
$$
then we can either write
$$
f'(x) = 3x^2 - 2
$$
or
$$
\frac{dy}{dx} = 3x^2 - 2.
$$
Likewise, if we want to take the partial derivatives of $z=f(x, y) = x^3 + 3xy^2 - 3x^2 - 3y^2 +7$ with respect to $x$ and $y$, we can write
$$
f_x(x, y) = \frac{\partial z}{\partial x} = 3x^2 + 3y^2 - 6x
$$
$$
f_y(x, y) = \frac{\partial z}{\partial y} = -6y + 6xy.
$$
To find the critical points of $f$, simply find the points $(x, y)$ where $\frac{\partial z}{\partial x} = \frac{\partial z}{\partial y} = 0$.
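Solving that system symbolically (a sketch, assuming sympy):
import sympy as sp
x, y = sp.symbols('x y', real=True)
f = x**3 + 3*x*y**2 - 3*x**2 - 3*y**2 + 7
print(sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True))
# critical points: (0, 0), (2, 0), (1, 1), (1, -1)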
H: Proving $\frac{1}{2}(5x+4),\;2 < x,\;\text{isPrime}(n)\Rightarrow n = 10k+7$
How is it possible to establish proof for the following statement?
$$n = \frac{1}{2}(5x+4),\;2<x,\;\text{isPrime}(n)\;\Rightarrow\;n=10k+7$$
Where $n,x,k$ are $\text{integers}$.
To be more verbose:
I conjecture that;
If $\frac{1}{2}(5x+4),\;2<x$ is a prime number, then $\frac{1}{2}(5x+4)=10k+7$
How could one prove this?
AI: If $n$ is prime, then clearly $x$ must be even (so that $n$ is an integer), and moreover $x$ has exactly one factor of $2$: write $x=2y$, so that $n=5y+2$; if $y$ were even, $n$ would be even and greater than $2$, hence not prime. Thus $y$ is odd, and
$$n = 5y+2 \equiv 5+2\equiv 7 \pmod{10},$$
since $5y\equiv 5 \pmod{10}$ for odd $y$.
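An empirical check of the statement (my own sketch, using sympy's primality test):
from sympy import isprime
for x in range(4, 2000, 2):       # x must be even for n to be an integer
    n = (5 * x + 4) // 2
    if isprime(n):
        assert n % 10 == 7        # every such prime ends in 7
print("verified for all even x up to 2000")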
H: Valid Proof that the Irrationals are Uncountable?
So I originally wanted to prove that the reals are uncountable, but the best solution I came up with was to prove the irrationals are uncountable so therefore the reals must be as well. I suppose my first question is, is this valid logic?
Next take any countable subset of $\mathbb{R} \backslash \mathbb{Q}$, put it as $S$. Since it's countable $\exists \; a : S \to \mathbb{N}$ whereas $a$ is bijective. Because it's bijective, $\exists \; b : \mathbb{N} \to S$ similarly bijective whereas $b$ is the inverse of $a$. Now let's form an irrational, put it as $s$. Start with the number defined as '$0.$'. The $n$th place after the decimal place the $n$th digit of $b_n$ with one added to it (if the $n$th digit of $b_n$ is $9$ put it to be $0$). It's clear that this number $s$ is not in the set $S$ so it follows that any countable subset of $\mathbb{R} \backslash \mathbb{Q}$ is a strict subset, so $\mathbb{R} \backslash \mathbb{Q}$ must be uncountable (otherwise it would apparently be a strict subset of itself...).
Now since $\mathbb{R} \backslash \mathbb{Q}$ is uncountable it follows that $\mathbb{R}$ is uncountable.
P.S. I tag this as topology because it's introduced in "basic topology" in Rudin
EDIT: as William and Thomas pointed out this isn't exactly valid, there is no guarantee that $s \in \mathbb{R} \backslash \mathbb{Q}$. So we can say let $S$ be a countable subset of $\mathbb{R}$ so $\exists \; a : S \to \mathbb{N}$ and similarly the inverse of $a$, $b : \mathbb{N} \to S$ and so form $s$ as the number $0.$ with the $n$th digit after the decimal point to be either the $n$th digit of $b_n$ (extending the number with as many $0$'s as needed) plus one or if that number turns out to be $10$, place $0$ there. Then it's clear that $s \not\in S$ so every countable subset is a strict subset so $\mathbb{R}$ must be uncountable. It then must follow that $\mathbb{R} \backslash \mathbb{Q}$ is uncountable since $\mathbb{Q}$ is countable, and taking a countable number of things from an uncountable number of things will clearly result in an uncountable number of things.
AI: This is essentially Cantor's diagonalization argument.
If you came up with this by yourself, that is really impressive. It is widely regarded to be one of the most beautiful proofs in mathematics.
H: What is the difference between regulator and stabilization
What is the difference between a regulator and stabilization in control theory?
Don't they both minimize the disturbance to the system?
Could the answer be elaborated from the view of state and output?
AI: If I understand your question properly, we have two things to consider.
Stability
This is the ability for a system to operate under a variety of conditions without self-destructing. There are two categories of interest: 1) the ability for the system to return to equilibrium after an initial displacement away from equilibrium and, 2) the ability for the system to produce a bounded output for a bounded input. There are differences between time-varying and nonlinear systems.
Regulator
In this system the reference input is zero (since it is missing). It is desired to keep the output as near to zero as possible in the system.
So, in both cases, we certainly want them to remain stable, but one allows two types of inputs, while the other wants to have a constant zero input. One can ask why they defined two different concepts for this and that is a valid question, but they needed a system that didn't have any inputs as opposed to the general case, I suppose.
H: Quotient rings of Gaussian integers $\mathbb{Z}[i]/(2)$, $\mathbb{Z}[i]/(3)$, $\mathbb{Z}[i]/(5)$,
I am studying for an algebra qualifying exam and came across the following problem.
Let $R$ be the ring of Gaussian Integers. Of the three quotient rings
$$R/(2),\quad R/(3),\quad R/(5),$$
one is a field, one is isomorphic to a product of fields, and one is neither a field nor a product of fields. Which is which and why?
I know that $2=(1+i)(1-i)$ and $5=(1+2i)(1-2i)$, so neither $(2)$ nor $(5)$ is a prime ideal of $R$. Then (I think) these same equations in $R/(2)$ and $R/(5)$, respectively, show that neither is an integral domain. Regardless, I can show that 3 is a Gaussian prime, hence $(3)$ is maximal in $R$ and $R/(3)$ is the field. But if I am correct about the others not being integral domains, I fail to see how either could be a product of fields.
I hope that this can be answered easily and quickly. Thanks.
AI: Now is a very good time for a quick foray into the ideal-theoretic version of Sun-Ze (better known as the Chinese Remainder Theorem). Let $R$ be a commutative ring with $1$ and $I,J\triangleleft R$ coprime ideals, i.e. ideals such that $I+J=R$. Then
$$\frac{R}{I\cap J}\cong\frac{R}{I}\times\frac{R}{J}.$$
First let's recover the usual understanding of SZ from this statement, then we'll prove it. Thanks to Bezout's identity, $(n)+(m)={\bf Z}$ iff $\gcd(n,m)=1$, so the hypothesis is clearly analogous. Plus we have $(n)\cap(m)=({\rm lcm}(m,n))$. As $nm=\gcd(n,m){\rm lcm}(n,m)$, if $n,m$ are coprime then compute the intersection $(n)\cap(m)=(nm)$. Thus we have ${\bf Z}/(nm)\cong{\bf Z}/(n)\times{\bf Z}/(m)$. Clearly induction and the fundamental theorem of arithmetic (unique factorization) give the general algebraic version of SZ, the decomposition ${\bf Z}/\prod p_i^{e_i}{\bf Z}\cong\prod{\bf Z}/p_i^{e_i}{\bf Z}$.
(How this algebraic version of SZ relates to the elementary-number-theoretic version involving existence and uniqueness of solutions to systems of congruences I will not cover.)
Without coprimality, there are counterexamples though. For instance, if $p\in\bf Z$ is prime, then the finite rings ${\bf Z}/p^2{\bf Z}$ and ${\bf F}_p\times{\bf F}_p$ (where ${\bf F}_p:={\bf Z}/p{\bf Z}$) are not isomorphic, in particular not even as additive groups (the product is not a cyclic group under addition).
Now here's the proof. Define the map $R\to R/I\times R/J$ by $r\mapsto (r+I,r+J)$. The kernel of this map is clearly $I\cap J$. It suffices to prove this map is surjective in order to establish the claim. We know that $1=i+j$ for some $i\in I$, $j\in J$ since $I+J=R$, and so we further know that $1=i$ mod $J$ and $1=j$ mod $I$, so $i\mapsto(I,1+J)$ and $j\mapsto(1+I,J)$, but these latter two elements generate all of $R/I\times R/J$ as an $R$-module so the image must be the whole codomain.
Now let's work with ${\cal O}={\bf Z}[i]$, the ring of integers of ${\bf Q}(i)$, aka the Gaussian integers. Here you have found that $(2)=(1+i)(1-i)=(1+i)^2$ (since $1-i=-i(1+i)$ and $-i$ is a unit), that the ideal $(3)$ is prime, and that $(5)=(1+2i)(1-2i)$. Furthermore $(1+i)$ is obviously not coprime to itself, while $(1+2i),(1-2i)$ are coprime since $1=i(1+2i)+(1+i)(1-2i)$ is contained in $(1+2i)+(1-2i)$. Alternatively, $(1+2i)$ is prime and so is $(1-2i)$ but they are not equal so they are coprime. Anyway, you have
${\bf Z}[i]/(3)$ is a field and
${\bf Z}[i]/(5)\cong{\bf Z}[i]/(1+2i)\times{\bf Z}[i]/(1-2i)$ is a product of fields.
Go ahead and count the number of elements to see which fields they are. However, ${\bf Z}[i]/(2)={\bf Z}[i]/(1+i)^2$ is not a field or product of fields, although the fact that its characteristic is prime (two) may throw one off the chase. In ${\bf Z}[i]/(1+i)^2$, the element $1+i$ is nilpotent. Since this ring has order four, it is not difficult to check that it is isomorphic to ${\bf F}_2[\varepsilon]/(\varepsilon^2)$, which is not a product of fields since $\varepsilon\leftrightarrow 1+i$ is nilpotent and products of fields contain no nonzero nilpotents.
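The factorizations and the Bezout identity are easy to verify with plain complex arithmetic (a quick sketch):
print((1 + 2j) * (1 - 2j))                  # (5+0j)
print(-1j * (1 + 1j) ** 2)                  # (2+0j), i.e. 2 = -i(1+i)^2
print(1j * (1 + 2j) + (1 + 1j) * (1 - 2j))  # (1+0j), so (1+2i) and (1-2i) are coprime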
H: What is the area of a face of the cube in $m^2$?
A fly is trapped inside a hollow cube. It moves from A to C along edges of the cube, taking the shortest possible route. It then comes back to A again along edges, taking the longest route (without going over any point more than once). If the total distance traveled is $5040$ meters, what is the area of a face of the cube in $m^2$?
Please check the following drawing; you will understand where A and C are.
AI: If $A$ is adjacent to $C$, then the shortest path is a single edge. The longest path visits all eight vertices, hence is seven edges long. Together that's eight edges; if the total distance is $5040$ meters, then each edge is 630 meters long, which makes each face of the cube $630^2=396900$ square meters, and the total cube surface area $2381400$ square meters.
Other cases ($A$ on the same face as $C$ but not adjacent, $A$ opposite from $C$), I leave for you to figure out.
More details as per request:
Again, for the case where $A,C$ are adjacent, the longest path is what's called a Hamiltonian path for a cube graph. In fact, there's a Hamiltonian cycle, which returns back to the starting point. The start and end point of the Hamiltonian path are adjacent, so for this case we can take a tour of all the other vertices along the way. For the problem as given (where $A,C$ are on the same face but not adjacent), as @Guest points out you can't visit all the vertices on the return trip.
H: Are the graphs of these two functions equal to each other?
The functions are: $y=\frac{x^2-4}{x+2}$ and $(x+2)y=x^2-4$.
I've seen this problem some time ago, and the official answer was that they are not.
My question is: Is that really true?
The functions obviously misbehave when $x = -2$, but aren't both of them indeterminate forms at that point? Why are they different?
AI: $(1)$ The first function is undefined at $x = -2$;
$(2)$ the second equation is defined at $x = -2$:
$$(x + 2) y = x^2 - 4 \iff xy + 2y = x^2 - 4\tag{2}$$ Its graph includes the entire line $x = -2$: at $x = -2$, equation $(2)$ holds for every value of $y$, so each point of the form $(-2, y)$ on that line is included in the graph of equation $(2)$. Not so with the first equation.
ADDED:
Just to see how well Wolfram Alpha took on the challenge: its graph of equation $(1)$ fails to show the omission at $x = -2$, though it does add a graph of equation $(2)$ as well.
Note: The pair of graphs (not reproduced here) do not match in terms of the scaling of the axes, so the line $y = x - 2$ looks sloped differently in one graph than in the other.
H: Find the minimum of $|a+\frac 2 {a-1}|$ where $|a|\leq2$.
Find the minimum of $|a+\frac 2 {a-1}|$ where $|a|\leq2$.
I tried using differentiation, but the absolute makes things troublesome...
Please help. Thank you.
AI: Find the minimum of $a+\frac{2}{a-1}$ on the interval $(1,2]$, and the minimum of $-(a+\frac{2}{a-1})$ on the interval $[-2,1)$. Take the smaller of these two minima and you're done.
These intervals were determined by checking when the inside of the absolute value was positive vs. negative.
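A numeric confirmation of the two-piece approach (my own sketch; the sample points avoid $a=1$, where the expression blows up):
import numpy as np
g = lambda a: np.abs(a + 2 / (a - 1))
right = g(np.linspace(1 + 1e-6, 2, 10**6))   # the piece on (1, 2]
left = g(np.linspace(-2, 1 - 1e-6, 10**6))   # the piece on [-2, 1)
print(min(right.min(), left.min()))          # about 1.8284, i.e. 2*sqrt(2) - 1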
H: Impossibility of polynomial approximation
This is exercise 12.6 in David Ullrich's Complex Made Simple. He has discussed many ways to prove the existence of polynomial approximations to functions in the complex plane, but not how to show such approximations are impossible in certain cases, which is the point of the problem. I have an idea, but I'm unsure about its validity.
Problem: Fix $M>0$. Let $f(z)$ be defined on $\mathbb C$ so that $f(0)=1$ and $f(z)=0$ for $z\neq 0$. Show there does not exist a sequence of polynomials $\{p_n\}$ such that $\lim_{n\rightarrow \infty} p_n(z) =f(z)$ if we require $|p_n(z)|<M$ for all $z$ with $|z|\le 1$ and for every $n$.
Proposed Solution: let $\gamma$ be the unit circle, traversed once. Consider the rational functions $p_n/z$. Using the dominated convergence theorem, permissible because the $p_n(z)$ are uniformly bounded along $\gamma$, we have
$$\lim_{n\rightarrow \infty} \int_\gamma p_n(z)/z \ dz = \int_\gamma \lim_{n\rightarrow \infty} p_n(z)/z \ dz = \int_\gamma f(z)/z = 0.$$
But each integral on the left is (modulo a factor of $2\pi i$) the residue of $p_n(z)/z$ at $0$, which is the constant term in the power series expansion of $p_n(z)$ at $0$, i.e. $p_n(0)$, and we are given this goes to $1$. We get $1=0$, and the contradiction shows the impossibility of the specified approximation.
Is this correct? And is there a better or more elementary way? Ullrich makes a point of avoiding theorems requiring measure theory to prove, like the dominated convergence theorem, so I suspect there is.
AI: How about this: The sequence $p_n$ is a uniformly bounded sequence of holomorphic functions on the (open) unit disc, and therefore a normal family (Montel's theorem). If they converge to $f$, they must converge uniformly on compact sets along some subsequence. But then $f$ would be continuous (indeed, holomorphic), and it isn't.
H: Is this elementary number theory proof correct?
Let $A(n)$ be the number of primes less than $n$, divided by $n$ (so, for example, $A(n) \leq 1$, as there cannot be more primes less than $n$ than there are integers less than $n$). Suppose that $n$ is a positive multiple of the positive integer $q$. Show that $$A(n) \leq \frac{q+(n-q)\frac{\phi({q})}{q}}{n}$$
This is what I have done.
First note that the above is equivalent to showing that (no. of primes $<n$) $\leq {q+(n-q)\frac{\phi({q})}{q}}$
Let $n=kq$
Therefore (no. of primes $<n$) $=$ (no. of primes $<kq$) $\leq \phi(kq)$ (remember $1$ is not prime)
$\phi(kq)=\phi(k)\phi(q) \leq (k-1)\phi(q)$ as $\phi(k) \leq (k-1)$.
$(k-1)\phi(q) < q+(k-1)\phi(q)=q+(qk-q)\frac{\phi(q)}{q}=q+(n-q)\frac{\phi(q)}{q}$ as required.
The reason I'm worried about this is that I have actually established a stricter inequality than in the question. However, I can't see anything wrong with this. Any reassurance or criticism would be much appreciated.
AI: You do not know that $(n/q,q)=(k,q)=1$, so you don't know that $\phi(qk)=\phi(q)\phi(k)$. Let me shine some light on a line of reasoning that involves just counting numbers.
Split the set $\{1,\cdots,kq\}$ into $k$ strings of consecutive integers,
$I_0=\{1,\cdots,q\}$
$I_1=\{q+1,\cdots,2q\}$
$I_2=\{2q+1,\cdots,3q\}$
$\cdots$
$I_{k-1}=\{(k-1)q+1,\cdots,kq\}$
Note that each interval contains exactly $q$ numbers and is a full system of representatives for the residues mod $q$. Now let $A\subseteq\{1,\cdots,kq\}$ be the set consisting of all of $I_0$ together with, for each $i>0$, all those elements of $I_i$ that are coprime to $q$. Since $iq+r$ is coprime to $q$ iff $r$ is coprime to $q$, $\#(I_i\cap A)$ is equal to the number of elements in $I_0$ coprime to $q$, which is $\phi(q)$. Hence
$$|A|=|I_0|+\sum_{i=1}^{k-1}|I_i\cap A|=q+(k-1)\phi(q).$$
Observe that $k-1=(n-q)/q$ and the set of all primes contained in the interval $\{1,\cdots,kq\}$ is a subset of $A$, so the $\#$ of primes is $\le |A|$. One motivation behind treating $I_0$ differently than $I_i$ for positive $i$ is that $q$ may be prime, but $q$ is not coprime to $q$, so $q\in I_0$ while $q$ would not lie in the subset of $I_0$ of elements coprime to $q$; this suggests $I_0$ requires special treatment.
You are right in thinking that this is a crude bound, though. (In fact, this bound grows linearly with $n$, as it is of the form $a+bn$, whereas $\pi(n)\sim n/\log n$ by the prime number theorem.)
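A numeric sanity check of the final bound (my addition; primepi(n-1) counts the primes below n):
from sympy import primepi, totient
q = 6
for k in range(2, 50):
    n = k * q
    assert primepi(n - 1) <= q + (n - q) * totient(q) / q
print("bound holds for q = 6 and n = 12, 18, ..., 294")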
H: Solve a System with Variable
Given these matrices, how does one find two real solutions?
$dx/dt$ =
$\begin{bmatrix}
3 & -5\\
5 & 3
\end{bmatrix}x$
with $x(0) = \begin{bmatrix}
2\\
-3
\end{bmatrix}$
AI: Hints:
We are given the system:
$$x' = Ax = \begin{bmatrix}
3 & -5\\
5 & 3
\end{bmatrix}x$$
with IC:
$$x(0) = \begin{bmatrix} 2\\ -3 \end{bmatrix}$$
The eigenvalues of $A$ satisfy $(3-\lambda)^2+25=0$, so $\lambda = 3\pm 5i$, with eigenvector $(1,-i)^T$ for $\lambda = 3+5i$. Taking real and imaginary parts of $e^{(3+5i)t}(1,-i)^T$, the general solution will be:
$$x_1(t) = e^{3t}(c_1 \cos 5 t + c_2 \sin 5 t)$$
$$x_2(t) = e^{3t}(c_1 \sin 5 t - c_2 \cos 5 t)$$
Using the IC will yield the final solution of:
$$x_1(t) = e^{3t}(2 \cos 5 t + 3 \sin 5 t)$$
$$x_2(t) = e^{3t}(2 \sin 5 t - 3 \cos 5 t)$$
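Cross-checking this closed form against a numeric integration (my own sketch, assuming scipy):
import numpy as np
from scipy.integrate import solve_ivp
A = np.array([[3.0, -5.0], [5.0, 3.0]])
sol = solve_ivp(lambda t, x: A @ x, (0, 1), [2.0, -3.0], rtol=1e-10, atol=1e-10)
t = sol.t[-1]
exact = np.exp(3*t) * np.array([2*np.cos(5*t) + 3*np.sin(5*t),
                                2*np.sin(5*t) - 3*np.cos(5*t)])
print(sol.y[:, -1], exact)   # the numeric and closed-form values agree closely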
H: $n$-linear alternating form with $\dim{V} < n$
Prove that every $n$-linear alternating form on a vector space of dimension
less than $n$ is the zero form.
AI: Let $k$ be a field, and let $V$ be a vector space over $k$, with $\dim V <n$. Let $\omega$ be an alternating $n$-form on $V$. Then $\omega : V^n \to k$ is an alternating, multilinear map. For general input $v_1,\ldots,v_n$, there exists a linear dependence $\sum c_i v_i=0$ among the $v_i$, by dimension of $V$. If $c_n \neq 0$ (e.g.), we can scale $c_n=1$. Then solving for $v_n$, we may write
$$\omega(v_1,\ldots,v_n)=\omega\left(v_1,v_2,\ldots,v_{n-1},-\sum_{i<n} c_iv_i\right)=-\sum_{i<n}c_i \omega\left(v_1,v_2,\ldots,v_{n-1},v_i\right),$$
in which the last step uses multilinearity. Since $\omega$ is alternating, switching the $i$th and $n$th inputs negates the value of $\omega$. Yet here $\omega(v_1,\ldots,v_{n-1},v_i)$ is clearly invariant under such a permutation, hence
$$\omega(v_1,\ldots,v_{n-1},v_i)=0$$
for all $i$. Thus $\omega$ is the zero map, as desired.
In general, we can show (by constructing a basis) that the dimension of the space of alternating $n$-forms on a $d$-dimensional space is given by $\binom{d}{n}$, which vanishes for $n > d$.
H: Sufficient condition for irreducibility of polynomial $f(x,y)$
Suppose we have a non-constant polynomial $f(x,y)\in\mathbb{Q}[x,y]$ such that the following two conditions are satisfied:
1) For every $x_0\in\mathbb{Q}$, the polynomial $f(x_0, y)\in\mathbb{Q}[y]$ is irreducible.
2) For every $y_0\in\mathbb{Q}$, the polynomial $f(x,y_0)\in\mathbb{Q}[x]$ is irreducible.
My question is this:
Can we conclude from these two conditions that $f(x,y)$ is an
irreducible polynomial in $\mathbb{Q}[x,y]$?
I am leaning on "yes" side. This was my unsuccessful attempt: Suppose $f(x,y)$ is reducible, then $f(x,y)=g(x,y)h(x,y)$ for some non-constant polynomials $g, h\in\mathbb{Q}$. Now, for each fixed $x_0\in\mathbb{Q}$, we have $f(x_0,y)=g(x_0, y)h(x_0,y)$. Since $f(x_0, y)$ is irreducible, it means $g(x_0,y)$ or $h(x_0,y)$ must be constant. I think from here we should be able to conclude either $g$ or $h$ does not depend on $y$. Is this approach correct? Or perhaps there is a counter-example?
Thanks!
AI: No. Let $f(x, y) = (x^2 + 1)(y^2 + 1)$.
H: How can I find these velocities without using the quadratic formula?
If a ball is thrown vertically upward with a velocity of $160 \text{ ft/s}$, then its height after t seconds is $s = 160t − 16t^2$.
a) What is the velocity of the ball when it is $384 \text{ ft}$ above the ground on its way up?
b) What is the velocity of the ball when it is $384 \text{ ft}$ above the ground on its way down?
$$\begin{align*}
384 &= 160t - 16t^2\\
16t^2 - 160t + 384 &= 0\\
16 (t^2 - 10t + 24) &= 0
\end{align*}$$
I can't factor the above... and I'm not supposed to use the quadratic formula. Am I stupid, or is this unsolvable without the formula?
AI: $$16(t^2 - 10 t + 24) = 0 \iff 16(t-4)(t-6) = 0 \implies t = 4,\;\; t = 6$$
Heading up: $384$ feet at time $t = 4$;
descending after reaching maximum height: $384$ ft at time $t = 6$. Since the velocity is $v(t) = s'(t) = 160 - 32t$, the requested velocities are $v(4) = 32$ ft/s on the way up and $v(6) = -32$ ft/s on the way down.
Note: $$(-4)\cdot (-6) = + 24;\quad -4 + - 6 = -10$$
Noticing those facts allows you to deduce that the factors must be $$\;t^2 - 10 t + 24 = (t - 4)(t - 6)$$
H: Probability that we choose a two headed coin
We have $501$ coins on the table, and assume that they have all been flipped onto that table (i.e., there is a mix of heads and tails). This also includes a two-headed coin.
Now if we pick up $1$ coin and it shows heads, what is the probability that it is the two-headed coin?
I've seen questions similar to this, but not the exact same. I did this out but want to hear your thoughts, before I explain how I arrived at the probability.
AI: We will use Bayes' theorem.
\begin{align}
\mathbb{P}(\text{two-headed coin }\vert \text{ we have a head}) & = \dfrac{\mathbb{P}(\text{we have a head }\vert \text{ two-headed coin}) \mathbb{P}(\text{two-headed coin})}{\mathbb{P}(\text{we have a head})}\\
&= \dfrac{1 \times \dfrac1{501}}{\dfrac12 \cdot \dfrac{500}{501} + \dfrac1{501}} = \dfrac1{251}
\end{align}
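The same computation with exact fractions (a short sketch):
from fractions import Fraction
prior = Fraction(1, 501)                               # P(two-headed coin)
p_heads = Fraction(1, 2) * Fraction(500, 501) + prior  # P(we have a head)
print(prior / p_heads)                                 # 1/251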
H: Find max and min of $IJ + FE + GH$
Let $D$ be a point inside $\triangle ABC$. Passing through $D$, construct $FE \parallel AB$, $IJ \parallel AC$, $GH \parallel BC$. Find the max and min of $IJ + FE + GH$.
Can this problem be solved by AM-GM? I tried $IJ + FE + GH \ge 3\sqrt[3]{\mid IJ\mid \cdot \mid FE\mid \cdot \mid GH\mid}$
and using $$\frac {FE}{AB} = \frac {CE}{CA}\implies FE = \frac {CE}{CA}\cdot AB\ ...$$
But still stuck
AI: Claim: $$\frac{IJ}{AC} + \frac{ EF} { AB} + \frac{ GH} { BC} = 2. $$
Corollary:
$$ AB + BC + CA - \max(AB, BC, CA) \leq EF + GH + IJ \leq AB + BC + CA - \min(AB, BC, CA)$$
Proof of Corollary
$x, y, z$ are real numbers from 0 to 1 subject to $x+y+z$ = 2. $a,b,c$ are non-negative values. Then, the minimum and maximum value of $ax+by+cz$ is $a+b+c-\max(a,b,c)$ and $a+b+c-\min(a,b,c)$ respectively.
Apply this to $x = \frac{IJ}{AC}, y = \frac{EF}{AB}, z = \frac{GH}{BC}$, $a=AC, b=AB, c=BC$.
Proof of claim: Use similar triangles to show that $\frac{IJ}{AC} = \frac{IJ+BI+NJ} {AB+BC+CA}$. Combine fractions and show that the numerator is twice the denominator.
Hint: $IJ = ID+DJ = AE+ HC$.
This arose from considering the equilateral triangle, and realizing that it was a constant 2 throughout. The extremes in the scalene case are obtained by setting $D$ at an extreme point, i.e. a vertex.
H: If a 3D-cake is cut by $n$ planes yielding the maximum number of pieces, then what is the number of pieces with the cake crust?
It is known that a 3D-cake can be cut by $n$ plane cuts at most into $N$ pieces, defined by Cake Number $N=\frac {1}{6}(n^3+5n+6)$. However, some of the pieces would have a crust of the cake as one of their boundaries. How many pieces are with crust? (Entire surface of the cake is assumed to be crust).
AI: To get the maximum number, the cutting planes are in general position.
Consider the 2-D surface (crust) of the cake. Let the intersections of the cutting planes with the crust be denoted as cutting circles. Remove 1 point that is not on any of the cutting circles. Project what remains onto the plane. We now have $n$ circles in general position cutting the plane. We know that the number of regions they divide it into is
$$n^2-n+2.$$
H: I'm having trouble with a definition of the upper and lower limits, and a theorem that follows it.
The following is the definition.
Let $\{s_n\}$ be a sequence of real numbers. Let $E$ be the set of numbers $x$ such that $s_{n_{k}}\rightarrow{x}$. This set $E$ contains all subsequential limits, plus possibly the numbers $+\infty$, $-\infty$. Now, putting $s^* = \sup (E)$ and $s_{*} = \inf (E)$. The numbers $s^{*}$ and $s_{*}$ are called upper and lower limits of ${s_n}$. $${\lim_{n \to \infty}} \sup s_n = s^{*}$$ $${\lim_{n \to \infty}}\inf s_n = s_{*}$$
I have no idea what this means.
From my understanding, if a sequence has a limit the limit is unique. Why is this definition implying that a sequence has multiple limits?
Or is it implying that a subsequence of a sequence can have a different limit?
My book lacks in examples and I cannot figure out what's going on at all ...
Also, I learned that $\infty$ is not a number. Why is this definition treating it as if it were one?
Theorem. Let ${s_n}$ be a sequence of real numbers. Then $s^*$ has the following properties.
a) $s^* \in E$
b) If $x>s^*$, $\exists N \in \Bbb Z$ such that $\forall n \ge N$, $s_n < x$.
Moreover, $s^*$ is unique.
I was able to understand the proof of a), and b), but I couldn't really understand the proof of the uniqueness. The book says,
Suppose $p<q$ where both $p$ and $q$ are upper limits. Choose $x$ such that $p < x < q$. Since $p$ satisfies b), we have $s_n<x$ for $n \ge N$. But then $q$ cannot satisfy a).
1) Why is it guaranteed that such $x$ between $p$ and $q$ exist?
2) Why will $q$ not satisfy a) ?
AI: Why is this definition implying that a sequence has multiple limits?
Andres has given a good answer to this in the comments. In short, even if a sequence itself does not converge, its subsequences can converge. The quoted definition is talking about limits of subsequences of a sequence.
Why is it guaranteed that such x between p and q exist?
We assumed that $p<q$, this means that $q-p=\varepsilon$ for some $\varepsilon>0$. You can take $x$ to be $p+\frac{\varepsilon}{2}$.
Why will q not satisfy a) ?
Since $s_n<x<q$ for $n \ge N$, every subsequential limit of $\{s_n\}$ is at most $x$, so
$${\lim_{n \to \infty}} \sup s_n \le x<q.$$
In particular, $q$ cannot be a subsequential limit of $\{s_n\}$, i.e. $q\notin E$, so $q$ fails property a).
H: Describing Domain of Integration (Triple Integral)
I'm really struggling to go about starting the following problem:
This question concerns the integral,
$\int_{0}^{2}\int_0^{\sqrt{4-y^2}}\int_{\sqrt{x^2+y^2}}^{\sqrt{8-x^2-y^2}}\!z\ \mathrm{d}z\ \mathrm{d}x\ \mathrm{d}y$.
Sketch or describe in words the domain of integration.
I've tried sketching the projections of this integral's domain onto the $xy$-plane, the $yz$-plane and the $xz$-plane. I established that the projection onto the $xy$-plane is the portion of the circle $x^2+y^2=4$ on $0\le x\le2,\ y\ge0$; that the projection onto the $yz$-plane is the portion of the circle $y^2+z^2=8$ above the line $z=y$, on $0\le y\le2$; and that the projection onto the $xz$-plane is the portion of the circle $x^2+z^2=8$ above the line $z=x$, on $0\le x \le2$.
I'm not completely confident that these findings are useful in any way, and if they are, I am struggling to see the link which will allow me to solve the problem.
Do you care to get me started?
Thank you very much.
AI: The bounds on $z$ correspond to equations:
$$z^2=x^2+y^2\\x^2+y^2+z^2=8$$
The first is a cone, the second is a sphere. They intersect above the $xy$-plane in a circle of radius $2$. The bounds on $x$ and $y$ show that we are considering the region in between the cone and the sphere with $x>0$ and $y>0$. Can you see it?
Let's first just consider the $xy$-plane. As you've said, the given bounds correspond to the quarter circle of $x^2+y^2=4$ that lies in the first quadrant. So whatever region we consider, it will lie completely above this quarter circle. Now $z$ lies in between the cone and the sphere as I mentioned above. You can think of this region as an ice cream cone. However, we're only considering part of the ice cream cone above our quarter circle, so its like a quarter of an ice cream cone.
Hopefully this is helpful, along with Babak's picture.
H: How to prove that $\int_0^{\infty} \log^2(x) e^{-kx}dx = \dfrac{\pi^2}{6k} + \dfrac{(\gamma+ \ln(k))^2}{k}$?
I was answering this question: $\int_0^\infty(\log x)^2(\mathrm{sech}\,x)^2\mathrm dx$ and in my answer, I encountered the integral
$$\int_0^{\infty} \log^2(x) e^{-kx}dx$$
which according to WolframAlpha for $k=1,2,3$ and thereby generalizing gives us
$$\int_0^{\infty} \log^2(x) e^{-kx}dx = \dfrac{\pi^2}{6k} + \dfrac{(\gamma+ \ln(k))^2}{k} \tag{$\star$}$$
Also, in general, is there a nice form for
$$\int_0^{\infty} \log^m(x) e^{-kx} dx \tag{$\perp$}$$
EDIT
For $(\star)$, if we let $kx = t$, we then get
$$I = \int_0^{\infty} \log^2(t/k) e^{-t} \dfrac{dt}k = \dfrac1k\int_0^{\infty}\log^2(t) e^{-t}dt-\dfrac{2\log(k)}{k}\int_0^{\infty}\log(t) e^{-t} dt + \dfrac{\log^2(k)}k$$
Considering the above, $(\star)$ and $(\perp)$ boil down to evaluating the integral
$$\color{red}{I(m) = \int_0^{\infty} \log^m(x) e^{-x} dx} \tag{$\spadesuit$}$$
Some values of $I(m)$.
\begin{align}
I(1) & = -\gamma\\
I(2) & = \zeta(2) + \gamma^2\\
I(3) & = -2\zeta(3) - 3 \gamma \zeta(2) - \gamma^3\\
I(4) & = \dfrac{27}2 \zeta(4) +8 \gamma \zeta(3) + 6 \gamma^2 \zeta(2) + \gamma^4\\
I(5) & = -24 \zeta(5) - \dfrac{135}2 \gamma \zeta(4) - 20 \gamma^2 \zeta(3) - 20\zeta(2) \zeta(3) - 10 \gamma^3 \zeta(2) -\gamma^5
\end{align}
AI: As mentioned in this answer
$$
\int_0^\infty\log(t)\,e^{-t}\,\mathrm{d}t=\Gamma'(1)=-\gamma
$$
Using the definition
$$
\Gamma(x)=\int_0^\infty t^{x-1}\,e^{-t}\,\mathrm{d}t
$$
and taking the derivative $n$ times, we get
$$
\Gamma^{(n)}(1)=\int_0^\infty\log(t)^n\,e^{-t}\,\mathrm{d}t
$$
In the aforementioned answer, we also have
$$
\Gamma'(x+1)=\Gamma(x+1)\left(-\gamma+\sum_{k=1}^\infty\left(\frac1k-\frac1{k+x}\right)\right)\tag{$\ast$}
$$
Taking the derivative of $(\ast)$ at $x=0$ yields
$$
\begin{align}
\Gamma''(1)
&=\Gamma'(1)\left(-\gamma\right)+\Gamma(1)\zeta(2)\\
&=\gamma^2+\zeta(2)
\end{align}
$$
Taking the second derivative of $(\ast)$ at $x=0$ yields
$$
\begin{align}
\Gamma'''(1)
&=\Gamma''(1)(-\gamma)+2\Gamma'(1)\zeta(2)+\Gamma(1)(-2\zeta(3))\\
&=-\gamma^3-3\gamma\zeta(2)-2\zeta(3)
\end{align}
$$
We can use Leibniz rule and
$$
\frac{\mathrm{d}^n}{\mathrm{d}x^n}\left(-\gamma+\sum_{k=1}^\infty\left(\frac1k-\frac1{k+x}\right)\right)=(-1)^{n+1}n!\,\zeta(n+1)\quad\text{ for }n\ge1
$$
applied to $(\ast)$, to get the recursion
$$
\Gamma^{(n)}(1)=-\gamma\,\Gamma^{(n-1)}(1)+(n-1)!\sum_{k=2}^n(-1)^k\zeta(k)\frac{\Gamma^{(n-k)}(1)}{(n-k)!}
$$
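A numeric check of $(\star)$ for a few values of $k$ (my own sketch, assuming mpmath):
from mpmath import mp, quad, log, exp, euler, pi
mp.dps = 30
for k in (1, 2, 3):
    lhs = quad(lambda x: log(x)**2 * exp(-k*x), [0, mp.inf])
    rhs = pi**2 / (6*k) + (euler + log(k))**2 / k
    print(k, lhs - rhs)   # each difference is ~ 0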
H: Why does the differential equation $y' = y + 1$ have solution $y(x) = Ce^x - 1$?
I was watching a video on differential equations for a class that I'm taking. I took calculus so long ago that I can't seem to figure why the differential equation $y' = y + 1$ has solution $y(x) = Ce^x - 1$.
AI: $$\frac{dy}{dx}=y+1$$
$$\frac{dy}{y+1}=dx$$
$$\ln|y+1|=x+a$$
$$y+1=\pm e^{x+a}=ce^x$$
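The same result from a CAS (a sketch, assuming sympy):
import sympy as sp
x = sp.symbols('x')
y = sp.Function('y')
print(sp.dsolve(sp.Eq(y(x).diff(x), y(x) + 1)))   # Eq(y(x), C1*exp(x) - 1)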
H: Proof of a theorem with upper/lower limits.
Theorem: If $s_n \le t_n$ for all $n$ greater than a fixed integer $N$, then $$\lim_{n \to \infty} \inf s_n \le \lim_{n \to \infty} \inf t_n$$
I would like to prove this and it would be nice if someone could check my work.
Proof: Letting $$\lim_{n \to \infty} \inf s_n = s_*$$ and $$\lim_{n \to \infty} \inf t_n = t_*$$ assume that $s_* > t_* $ while there exists $N$ such that for all $n$ >$N$, $s_n \le t_n$.
Then, choosing $c$ with $t_* < c < s_*$, there must be infinitely many integers $m$ (in particular some $m > N$) such that $t_m < c$. For each such $m$ we get $s_m \le t_m < c$, so some subsequential limit of $\{s_n\}$ is at most $c < s_*$. This contradicts the fact that $s_*$ is the infimum of the subsequential limits of $\{s_n\}$.
AI: It is right. Another nice proof is this:
For any $N \le n$, $s_n\le t_n$, and hence $\inf \{s_n: N\le n\} \le \inf \{t_n: N\le n\}$. Then we can conclude that $$\lim_{n \to \infty} \inf s_n \le \lim_{n \to \infty} \inf t_n.$$ This completes the proof.
H: Strong characterization of $\mathbb C$ with respect to $\mathbb R$
$\mathbb R^2$ is not a field, but 2-tuple arithmetic rules like $(a,b)(c,d)=(ac-bd,ad+bc)$ coupled with $\mathbb R^2$ make it a field. Are there other rules than $(a,b)(c,d)=(ac-bd,ad+bc)$ that, combined with $\mathbb R^2$, make it a field?
Is $\mathbb C$ the only field that contains $\mathbb R$ as a subfield or are there other fields that contain $\mathbb R$ as subfields?
Is $\mathbb C$ the only field that can be made by coupling $\mathbb R^2$ with rules like $(a,b)(c,d)=(ac-bd,ad+bc)$? Are there other rules that can be coupled with $\mathbb R^2$ to make it a field?
PS: I am just trying to pin down some properties that make $\mathbb C$ special/unique among other fields that can be constructed using some specific algebraic structure on 2-tuples of reals, e.g. $(\sqrt 2,\pi)$. At this point this is just a wide net, as I lack the terminology to make this question specific. But any hints on how to narrow it to listing properties that make $\mathbb C$ unique, or that give a strong characterisation of the relationship between $\mathbb C$ and $\mathbb R$, are welcome.
AI: For any field $F$, the field $F(t)$ of rational functions in the variable $t$ is a field that contains $F$. The degree $[F(t):F]$ is infinite, and $F(t)$ is a transcendental (i.e., non-algebraic) extension of $F$. Either of these observations suffices to check that $\mathbb{R}(t)\not\cong\mathbb{C}$ (as extensions of $\mathbb{R}$), because $[\mathbb{C}:\mathbb{R}]=2$ and $\mathbb{C}/\mathbb{R}$ is algebraic. Moreover, there is not even an abstract isomorphism between $\mathbb{R}(t)$ and $\mathbb{C}$ (i.e., one which does not necessarily respect the way $\mathbb{R}$ sits inside each of them) because $\mathbb{R}(t)$, and indeed $F(t)$ for any field $F$, is not algebraically closed, whereas $\mathbb{C}$ is.
Also, this is up to isomorphism; technically, if you've chosen what you mean by "$\mathbb{C}$" and I take your $\mathbb{C}$ and paint the elements of $\mathbb{C}\setminus\mathbb{R}$ red, that is a field different from $\mathbb{C}$ that contains $\mathbb{R}$. Similarly, $\mathbb{C}$ and $\mathbb{R}^2$ are different things; yes, $\mathbb{C}$ is a two-dimensional real vector space, and so we often think of it as a plane, but this is an identification we are making - they are not literally equal.
Yes, if $F$ is a field that contains $\mathbb{R}$ and $[F:\mathbb{R}]=2$ (i.e., $F\cong \mathbb{R}^2$ as real vector spaces), then $F\cong \mathbb{C}$. This is because $[F:\mathbb{R}]$ finite $\implies$ $F$ is an algebraic extension of $\mathbb{R}$, so $\overline{F}$ (the algebraic closure of $F$) is an algebraically closed field containing $\mathbb{R}$ and which is an algebraic extension of $\mathbb{R}$, so that $\overline{F}$ is also an algebraic closure of $\mathbb{R}$, and therefore $\overline{F}\cong\mathbb{C}$.
The most straightforward characterization of $\mathbb{C}$ is just that it is an algebraic closure of $\mathbb{R}$. It is also true that $\mathbb{C}$ is (up to isomorphism) the unique algebraically closed field of characteristic $0$ and cardinality $\mathfrak{c}$. |
H: Minimum number of coconuts
Three friends, namely $A$, $B$ and $C$, collected coconuts with the help of a monkey and fell asleep. At night, $A$ woke up and decided to take his share. He divided the coconuts into three shares, gave the left-over coconut to the monkey, and fell asleep. In the same way, $B$ and then $C$ got up and, not knowing whether anybody else had woken up, divided the coconuts, giving the left-over coconut to the monkey. Early in the morning all of them woke up together, divided the coconuts into equal shares, and gave the left-over coconut to the monkey.
What is the minimum number of coconuts they counted?
AI: Let $n$ be the total.
$A$ first takes $\dfrac{n-1}3$ and $1$ goes to the monkey. Left out coconuts is $n-1-\dfrac{n-1}3 = \dfrac{2(n-1)}3$.
After $B$ does the same at night, left out coconuts is $\dfrac23 \cdot \left(\dfrac23 (n-1)-1 \right)$.
After $C$ does the same at night, left out coconuts is $\dfrac23 \cdot \left(\dfrac23 \cdot \left(\dfrac23 (n-1)-1 \right) -1\right)$.
Let $m$ be the number of coconuts left in the morning.
Hence, we have
$$\dfrac23 \cdot \left(\dfrac23 \cdot \left(\dfrac23 (n-1)-1 \right) -1\right) = m \implies n = \dfrac{27m+38}8$$
$n$ is an integer only when $$27m \equiv 2 \pmod8 \implies m \equiv 6 \pmod8$$ We also know that $m=3k+1$, since they share in the morning and give one to the monkey. Hence,
$$m \equiv 6 \pmod8 \text{ and } m \equiv 1 \pmod3 \implies m \equiv 22 \pmod{24}$$
Since we want $n$ to be minimal, $m$ must be minimal as well. Hence, we have $$\color{red}{m=22 \implies n = 79}$$
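A brute-force confirmation (my own sketch): simulate the three night-time splits plus the morning split and find the smallest count that works.
def works(n):
    for _ in range(3):            # A, B and C each split at night
        if n % 3 != 1:
            return False
        n = 2 * (n - 1) // 3      # one to the monkey, keep a third
    return n % 3 == 1             # the morning split also leaves one over
n = 1
while not works(n):
    n += 1
print(n)   # 79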
H: probability density functions
Suppose $Y$ is a random variable with pdf $f(y)=ky$, $y=3/n,6/n,9/n,\ldots,3n/n$.
Find the value of the constant $k$ and write down $Y$'s cdf.
Find simple general expressions for $EY, \text{Var} \,Y, P(Y=3/2)$ and $P(Y>3/2)$
For the case $n=10$, evaluate $EY, \text{Var}\,Y,P(Y=3/2)$ and $P(Y>3/2)$, then write down $Y$'s pdf and cdf.
Repeat (3) for the limiting case as $n$ tends to infinity.
HINT: The random variable in (4) has cdf equal to the limit of the cdf (1) as $n\to \infty$.
I am wondering if it's possible to consider $Y$'s cdf by cases, like when $y=3/n, y=6/n,\ldots$, and sum up to get the value of $k$.
Could anyone help? Thanks.
AI: Indeed. For the function $f$ to be a valid PDF one must have that $f(y)\geq 0$ for all possible $y$-values and that
$$
\sum_{y}f(y)=1\tag{1}
$$
where the sum is taken over the possible values of $y$. Note that the possible values $$\frac3n, \frac6n,\ldots,\frac{3n}n$$ are of the form
$\frac{3i}{n},$ where $i=1,2,\ldots,n$, and hence $(1)$ becomes
$$
\sum_{i=1}^n k\frac{3i}{n}=1.
$$
Using that $\sum_{i=1}^n i = \frac{(n+1)n}{2}$ we obtain that
$$
1=\frac{3k}{n}\sum_{i=1}^ni=\frac{3k}{n}\frac{n(n+1)}{2}=k\frac32(n+1)
$$
exactly as proposed by @Argha.
To find the CDF of $Y$ we note that $P(Y=y)=f(y)=ky$ for $y=\frac{3i}{n}$ for some $i=1,\ldots,n$. The CDF is by definition given by
$$
F(y)=P(Y\leq y)=\sum_{y'\leq y}f(y')
$$
so you're simply summing $f(y')$ over the values $y'$ that are less than or equal to $y$. Hence $F(y)$ becomes a step function that jumps exactly at the values $\frac{3i}n$ for $i=1,\ldots,n$.
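Working part (3) numerically for $n=10$ (my own sketch, with exact fractions):
from fractions import Fraction
n = 10
ys = [Fraction(3 * i, n) for i in range(1, n + 1)]
k = 1 / sum(ys)                                       # 2/33, matching k = 2/(3(n+1))
EY = sum(k * y * y for y in ys)                       # E[Y] = 21/10
P_eq = k * Fraction(3, 2)                             # P(Y = 3/2) = 1/11
P_gt = sum(k * y for y in ys if y > Fraction(3, 2))   # P(Y > 3/2) = 8/11
print(k, EY, P_eq, P_gt)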
H: Relation between chords length and radius of circle
Two chords of a circle, of lengths $2a$ and $2b$, are mutually perpendicular. If the distance of the point at which the chords intersect from the centre of the circle is $c$ ($c <$ radius of the circle), then find the radius of the circle in terms of $a,b$ and $c$. Show some short-cut to do it quickly.
AI: Let $P$ be the point where the two chords (and a diameter) meet. Let $h$ (and $k$) be the distance from $P$ to the midpoint of the $2a$ chord (respectively, the $2b$ chord); that is, say $P$ divides the chord into sub-segments of length $a+h$ and $a-h$ (respectively, $b+k$ and $b-k$). Note that $P$ divides a diameter into sub-segments of length $r+c$ and $r-c$ (where $r$ is the radius of the circle); note also that $c$ is the hypotenuse of a right triangle with legs $h$ and $k$: so, $c^2 = h^2 + k^2$.
The Power of a Point principle says that every chord through a particular point of a circle is divided into sub-segments such that the product of the lengths of those sub-segments is a constant (the so-called "power" of the point in question). Thus,
$$(a+h)(a-h) = (b+k)(b-k) = (r+c)(r-c)$$
More succinctly,
$$a^2 - h^2 \;\;=\;\; b^2 - k^2 \;\;=\;\; r^2 - c^2$$
With an eye towards combining an $h^2$ with a $k^2$, I'll add the left-hand and "middle-hand" sides together; their sum is necessarily twice the right-hand side:
$$\begin{align}
( a^2 - h^2 ) + ( b^2 - k^2 ) &= 2 (r^2 - c^2) \\
\implies a^2 + b^2 - ( h^2 + k^2 ) &= 2 r^2 - 2 c^2 \\
\implies a^2 + b^2 - c^2 &= 2 r^2 - 2 c^2 \\
\implies a^2 + b^2 + c^2 &= 2 r^2
\end{align}$$
H: Why is it meaningless for a closed set to be polygonal path connected?
My textbook (Complex Analysis by Saff & Snider) defines connectedness for open sets; the given definition of a connected open set is: a set in which every pair of points can be joined by a polygonal path that lies entirely in the set.
Using the given definition of connected set, I don't understand why it isn't similarly defined for closed sets too?
AI: Polygonal connectedness (which is a stronger condition than topological connectedness) can be defined for any subset of the complex plane. The definition will be, as you'd expect, that a set $A\subseteq \mathbb C$ is polygonally connected if any two points in it can be connected by a polygonal path inside $A$.
The authors simply did not bother with the most general possibility. They will mostly care about subsets of the plane that are open and polygonally connected. The reason is that in complex analysis such subsets are very important. Openness is important because you want to be able to talk about a function being holomorphic in the given region, and if the region is not open this leads to annoying technical difficulties. Polygonal connectedness is important because it means any function defined on the region can be integrated between any two points in the region. These are things that happen often in complex analysis, and so you will see lots and lots of theorems that say something like "if $f$ is holomorphic on an open polygonally connected set, then ....".
H: Simple math equation: find the sum of 4 numbers and, if greater than a number X, reduce all 4 numbers respectively
I'm not the greatest at math, but I have the following problem:
impressions = 791.
watched 100 = 500
watched 75 = 383
watched 50 = 600
watched 25 = 700
The sum of all watched fields is 2183.
Since the sum of all watched fields is greater than impressions, I want to reduce the numbers in each of the 4 fields until the sum of all 4 fields equals impressions.
I have tried many methods, such as finding the percentage of 791 in 2183, then using that percentage to reduce each of the 4 fields respectively, but I never get a round number.
Any ideas?
Edit: Example of 1 of my many failed attempts :p
(2183 - 791) / 2183 ≈ 0.63
500 X 0.63 = 315
500 - 315 = 185.
383 x 0.63 = 241.29
383 - 241.29 = 141.71
600 x 0.63 = 378
600 - 378 = 222
700 x 0.63 = 441
700 - 441 = 259
answer : 809
809 != 791 :(
AI: Let $s$ be a scaling factor defined as $s = 791/2183.$
$500\times s = 181.17\ldots$
$383\times s = 138.77\ldots $
$600\times s = 217.40\ldots$
$700\times s = 253.64\ldots$
Add these up:
$181.17\ldots+138.77\ldots+217.40\ldots+253.64\ldots = 791$ |
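In code (a minimal sketch of the same computation; the variable names are mine, not from the question):

    impressions = 791
    watched = [500, 383, 600, 700]          # watched 100 / 75 / 50 / 25

    s = impressions / sum(watched)          # scaling factor s = 791/2183
    scaled = [v * s for v in watched]

    print([round(v, 2) for v in scaled])    # [181.17, 138.78, 217.41, 253.64]
    print(sum(scaled))                      # 791.0 (up to float rounding)

If you need whole numbers, one option is to round each scaled value and then absorb any leftover difference into the largest field.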
H: When speaking of neighbourhoods in complex analysis, are we always referring to circular neighbourhoods?
In Complex Analysis, does "neighbourhood" automatically mean "circular neighbourhood", or do non-circular ones exist?
AI: The term "neighborhood" of a point comes from topology, where it is taken to mean any open set containing that point. This convention carries over into complex analysis.
Often, a circular neighborhood of a point and a more general neighborhood of a point are interchangeable in practice. If you need a circular neighborhood of a point, it suffices to find a general open set containing the point, since then you can just take a circular neighborhood contained in that open set; and if you need a general open set, of course a circular neighborhood will suffice. So topologically there really is no distinction.
Besides, many theorems in complex analysis that use circular neighborhoods can be adapted to more general regions; this is the idea behind the Riemann mapping theorem.
H: Does an irreducible $\mathbb CG$-module have a basis of the form $u,ug_1,\dots,ug_n$?
Suppose that $U$ is an irreducible $\mathbb CG$-module and $u\in U$. Let $\operatorname{span}(u_1,\dots,u_k)$ denote the linear span of vectors $u_1,\dots,u_k\in U$.
I was thinking along these lines: there must exist some $g_1\in G$ such that $ug_1\notin\operatorname{span}(u)$, otherwise $\operatorname{span}(u)$ would be a nontrivial $\mathbb CG$-submodule. So I have two linearly independent vectors $u,ug_1\in U$.

Now $\operatorname{span}(u,ug_1)$ again is not closed under multiplication by group elements, otherwise it would be a nontrivial submodule. So I have some $(au+bug_1)g\notin \operatorname{span}(u,ug_1)$, which implies that either $ug$ or $ug_1g$ does not belong to $\operatorname{span}(u,ug_1)$.

I could continue in this way until I reach the dimension of the $\mathbb CG$-module $U$.
However I have not seen this result in the introductory chapters of the book on representations I am studying. Which makes me think that either this result is not very useful, or that I have made a mistake somewhere. So is what I conjectured above true?
If $U$ is irreducible $\mathbb CG$-module and $u\in U$, does there exists a basis for $U$ consisting of vectors of the form $u,ug_1,\dots,ug_n$, where $g_1,\dots,g_n\in G$?
AI: Sure.
If $u$ is any non-zero vector of an irreducible module $V$, then the subspace spanned by $X=\{ug:g\in G\}$ is a non-zero submodule. Irreducibility then implies that $X$ spans $V$, and therefore linear algebra tells you that some subset of $X$ is a basis of $V$.
H: The closest point to a curve in space
I am working on the following problem.
Find the point closest to the origin on the curve of intersection of the plane $2y+4z =5$ and the cone $z^2 = 4(x^2+y^2)$.
I was able to see that the intersection will be a tilted ellipse with coordinates $(x,y,4(x^2+y^2))$.
So I was thinking that I could try putting $F(x,y) = \sqrt{x^2+y^2+16(x^2+y^2)^2}$ and optimize it. But this seems to be too long a calculation and I want to somehow use vectors, such as dot products and gradients because that seems to be easier.
Can anyone help me get started ?
AI: This is a typical exercise for the Lagrange multipliers theorem (see http://en.wikipedia.org/wiki/Lagrange_multiplier). You want to reformulate the problem as follows:

Find the minimum of the function $\sqrt{x^2 + y^2 + z^2}$ (that is, the distance from the origin; you can even use the simpler function $x^2 + y^2 + z^2$, because the minimizer will be the same) on the set which is the intersection of the plane $2y+4z=5$ and the cone $z^2=4(x^2+y^2)$.
I guess you know a little about Lagrange multipliers, this shouldn't be hard to compute. |
H: Series expansion with remaining $\ln n$
I'm studying the asymptotic behavior $(n \rightarrow \infty)$ of the following formula, where $k$ is a given constant.
$$ \frac{1}{n^{k(k+1)/(2n)}(2kn - k(1+k) \ln n)^2}$$
I'm trying to do a series expansion on this equation to give the denominator a simpler form so that it is easier to make an asymptotic analysis.
I used mathematica/wolframalpha to expand the formula (the documents say Taylor expansion is used).
http://www.wolframalpha.com/input/?i=1%2F%28n%5E%28k+%28k+%2B+1%29%2F%282+n%29%29+%282+k+n+-+k+%281+%2B+k%29+Log%5Bn%5D%29%5E2%29
However in series expansion at $n \rightarrow \infty$, the result still has $\ln n$. This is actually a form I prefer, compared to the form
$$a_0 + a_1x + a_2 x^2+...$$
Does anyone see how the result may be produced? Any help is much appreciated. Thanks.
AI: Hint: $n^{k(k+1)/(2n)}=\mathrm e^{k(k+1)\log(n)/(2n)}=1+u+u^2/2+\ldots$ with $u=k(k+1)\log(n)/(2n)$.
This yields the expansion
$$
\frac1{4k^2n^2}\left(1-\frac{(k+1)(k-2)}2\frac{\log n}n+O\left(\left(\frac{\log n}n\right)^2\right)\right).
$$ |
H: Problem with Discrete Parseval's Theorem
I think I must be missing something obvious, but I can't for the life of me see what it is. The discrete version of Parseval's theorem can be written like this:
$\sum_{n=0}^{N-1} |x[n]|^2 = \frac{1}{N}\sum_{k=0}^{N-1} |X[k]|^2 $

Now, say you've got some function in time, like $x = \sin(\omega t)$. Depending on how large your value of $N$ is, the LHS could be arbitrarily large.

The FT of this function is a delta function at $\omega$. Everywhere else is zero, so your RHS is given by $1/N$. So how on earth does one side equal the other?! It seems to me like you've got a summation on one side and an average value on the other...
I must be missing something obvious!
AI: I completely understand your confusion. It is indeed not so obvious. First of all, you have to be careful which version of Parseval's theorem you are considering. For continuous-time signals or for discrete-time signals, and in the second case for infinitely long signals or for the DFT relation, where only signals of length $N$ are considered. The way you've stated it is the DFT version. This means that the transform of a sampled sinusoid is usually NOT a delta impulse unless the frequency is $2\pi k/N$ (i.e. an integer number of periods fit in the DFT interval of length $N$). If you consider the discrete-time case for infinitely long signals, then neither of the two sums converges in the case of a sinusoidal signal.
Example:
$x(n)=\cos\frac{2\pi m}{N}n$ with $0\le m\le N-1$. In this case the DFT
$$X(k)=\sum_{n=0}^{N-1}x(n)e^{-jn\frac{2\pi k}{N}}$$ is indeed given by two discrete deltas:
$$X(k)=\frac{N}{2}\left [ \delta(k-m) + \delta(k+m-N)\right ]$$
Applying Parseval's theorem gives
$$\sum_{n=0}^{N-1}|x(n)|^2=\frac{1}{N}\sum_{k=0}^{N-1}|X(k)|^2=\frac{N}{2}$$ |
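If you want to see this numerically, here is a small numpy sketch of mine (np.fft.fft uses the same sign convention as the DFT above):

    import numpy as np

    N, m = 64, 5
    n = np.arange(N)
    x = np.cos(2 * np.pi * m * n / N)     # an integer number of periods in the window
    X = np.fft.fft(x)

    lhs = np.sum(np.abs(x) ** 2)
    rhs = np.sum(np.abs(X) ** 2) / N
    print(lhs, rhs, N / 2)                # all three agree: 32.0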
H: Open, closed and continuous
I have some troubles to understanding something:
We were asked to find a function that is open and continuous but not closed. I actually found such a function, but our tutor gave us this example:
$ e^x$ since this function is continuous as a mapping $$\exp: \mathbb{R} \rightarrow \mathbb{R}_{>0}$$ and the inverse function is continuous too, so this function is also open.
and the counterexample was that $\mathbb{R}$ is closed and is mapped to $\mathbb{R}_{>0}$, which is open.

Then I thought: hey, if the inverse function is continuous, that means that the preimages of closed sets have to be closed, but $\ln^{-1}(\mathbb{R})$ is not closed at all. Where am I wrong?
AI: Your tutor is wrong. Every homeomorphism (such as $\exp:\mathbb R\to\mathbb R_{>0}$) is closed. The set $\mathbb R_{>0}$ is a closed set in the standard topology on $\mathbb R_{>0}$. |
H: Measures on all subsets of $\aleph_0$
A theorem of Ulam says:
A finite measure $\mu$ defined on all subsets of a set of cardinality $\aleph_1$ must be $0$ for all subsets if it sends every $1$-element subset to $0$.
Will this statement hold if we substitute $\aleph_0$ for $\aleph_1$ and require the measure to be finitely additive only? In particular, could we define a nontrivial finite measure on the set of all finite graphs that assigns each finite set a measure $0$?
AI: Finitely additive measures are in fact rather easy: there exists a non-trivial finitely additive measure on any infinite set which is $0$ on every finite subset. What causes all the problems with measures is countable additivity.
The key to the solution is the word `ultrafilter'.
A filter on the set $\aleph_0$ is a system of sets $\cal F$ (a system of `big' sets; in our case these will be the sets of non-zero measure) which satisfies
if $A\in \cal F$, and $B\supseteq A$ then $B\in \cal F$,
if $A,B\in \cal F$ then $A\cap B\in \cal F$,
$\aleph_0\in \cal F$, and $\emptyset \notin \cal F$.
An example of such a system is the system of all cofinite subsets of $\aleph_0$ (sets whose complement is finite), the so-called Fréchet filter. (Note that the system of all infinite subsets is not a filter, since two infinite sets can have finite intersection.)
A filter is an ultrafilter if it is maximal with respect to inclusion, i.e. we have to add one condition: `for every $A$, either $A \in \cal F$, or $\aleph_0 \setminus A \in \cal F$'. By the axiom of choice (Zorn's lemma) we know that every filter can be extended to an ultrafilter.
So fix an ultrafilter $\cal F$ which extends the Fréchet filter of cofinite subsets of $\aleph_0$. Then we can finally define the measure as:
$$
\mu(A) = \cases{
1 & if $A\in \cal F$,\cr
0 & if $A\notin \cal F$.\cr
}
$$
You can check that this is a finitely additive measure, and it is $0$ on every finite set (the complement of a finite set is cofinite, hence in $\cal F$, so the finite set itself is not in $\cal F$). Well, all the difficult steps have been done in the ultrafilter notation, which can be very confusing. If you want to understand ultrafilters better, you can solve the following exercise:
Given a finitely additive measure $\mu$ defined on all subsets of $X$, such that $\mu(X) = 1$, show that the systems of sets $\{ A : \mu(A) = 1 \}$, and $\{ A : \mu(A) \neq 0\}$ are filters on $X$.
There are also many other filters on $X$ that can be defined by the measure $\mu$, unless $\mu$ gives just values $0$, or $1$, in that case there is only one such filter, which is also an ultrafilter. |
H: inner product space and polynomial
Let $V = \mathrm{span}\{1,x,x^2,x^3\}$ be a real inner product space with the
inner product defined by
$$
\langle f,g\rangle =\int\limits_{-1}^{1} fg
$$
Check that $T(f) = f(0)$ is a linear functional on this space and find the element of $V$ which represents it (corresponds to).
So how should I do it? Where do I begin?
I am guessing the first part is easy because I have to prove that
$$T(f+g)=(f+g)(0)=f(0)+g(0)=T(f)+T(g)$$
and that
$$T(af)=af(0)=aT(f)$$
but what about this vector that I need to find?
I know that it has something to do with Gram-Schmidt but I do not know how.
AI: As you said, in order to prove that $T[f] = f(0)$ is a linear functional you just need to check the two conditions of linearity, ie.
$$T[f + g] = (f + g)(0) = f(0) + g(0) = T[f] + T[g]$$ and
$$T[af] = af(0) = aT[f]\qquad \forall a \in K = \mathbb{R}\ \text{or}\ \mathbb{C}.$$
Now you are asked to find an element representing a linear functional, therefore I suppose you are asked to use the Riesz Representation Theorem. This is what I am talking about: http://en.wikipedia.org/wiki/Riesz_representation_theorem.
Roughly speaking, you are asked to look for the vector $g$ satisfying the equality
$$Tf = (f,g).$$
Before doing this just a couple of observations:
$1)$ I suppressed the notation [] as is usually done with linear operators.
$2)$ I am assuming that the field is $\mathbb{R}$ only because there is no conjugation in the integral defining the inner product.
$3)$ The existence of such a $g$ is given by the above Theorem by Riesz. (It is also unique!)
Finally the problem is just a matter of calculations: let $$g = a x^3 + bx^2 + cx + d.$$ It is determined if we determine its coefficients. To do this just remember that the integral of an odd function over an interval symmetric with respect to the origin equals $0$ and it is two times the integral on the positive subinterval if the function is even. In formulas, if $f(x) = -f(-x)$ then $\int_{-1}^1f(x)dx = 0$, while if $f(x) = f(-x)$ we have $\int_{-1}^1f(x)dx = 2\int_0^1f(x)dx$.
Now let's compute $\int_{-1}^1f(x)g(x)dx$ for some (non-random) $f$.
If $f(x) = 1$ then we impose
$$1 = f(0) = Tf = \int_{-1}^1(a x^3 + bx^2 + cx + d)dx = 2\int_0^1(bx^2 + d)dx = \frac{2}{3}b + 2d.$$
If $f(x) = x^2$ then we impose
$$0 = f(0) = Tf = \int_{-1}^1(ax^5 + bx^4 + cx^3 + dx^2)dx = 2\int_0^1(bx^4 + dx^2)dx = \frac{2}{5}b + \frac{2}{3}d$$
We can now find $b$ and $d$ by solving the system
$$1 = \frac{2}{3}b + 2d$$
$$0 = \frac{2}{5}b + \frac{2}{3}d.$$
Now you should be able to continue by yourself, and you should also have an idea of what could be nice choices for $f$ to find $a$ and $c$.
I hope this helps! |
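Postscript: if it helps, here is a small sympy sketch of mine (not part of the original solution) that carries the calculation through all four basis choices $f=1,x,x^2,x^3$ at once; it gives the representing element $g(x)=\frac98-\frac{15}8x^2$:

    import sympy as sp

    x, a, b, c, d = sp.symbols('x a b c d')
    g = a*x**3 + b*x**2 + c*x + d

    # For each basis polynomial f = x**j, impose <f, g> = f(0).
    eqs = [sp.Eq(sp.integrate(x**j * g, (x, -1, 1)), (x**j).subs(x, 0))
           for j in range(4)]

    sol = sp.solve(eqs, [a, b, c, d])
    print(sol)                          # {a: 0, b: -15/8, c: 0, d: 9/8}
    print(sp.expand(g.subs(sol)))       # 9/8 - 15*x**2/8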
H: Is there an element of $^* \Bbb N$ that is Dedekind-infinite?
One definition of a finite set is that it can be injected into an initial segment of $ \Bbb N$, thus any $n$ in $\Bbb N$ is finite.
Accordingly, if it's legitimate to define every element in $^* \Bbb N$ as its own initial segment, then every element of $^* \Bbb N$ is finite.
What about Dedekind-infinity? Can we define a Dedekind-infinite element in $^* \Bbb N$ by constructing an injection into a proper subset of it?
Added: I'm not sure whether the construction of $^* \Bbb N$ is compatible with $\bf ZFC$ in the same way as $\Bbb N$ is; probably it is not, I guess. But how?
AI: There is a problem with this idea because we use $\in$ to construct $\omega$ which is a surrogate for $\Bbb N$ in $\sf ZFC$. This is fine because both $\in$ and $\Bbb N$ are well-founded.
But $^*\Bbb N$ is not well-founded. The non-standard integers must have a decreasing sequence. So we cannot use $\in$ to model them in a model of $\sf ZFC$. Of course, we can have non-standard models of $\sf ZFC$ whose $\omega$ is actually $^*\Bbb N$, but that's besides the point. Inside these models, the set which they think is $\omega$ is well-founded (internally!).
You could perhaps use $\subseteq$ to define the order on $^*\Bbb N$ somehow and use $\omega$ for the initial segment of the standard integers. In that case it shouldn't be hard to note that the standard integers correspond to Dedekind-finiteness, and the non-standard integers are Dedekind-infinite sets.
Why? Because any non-standard integer would have to have infinitely many smaller integers, which correspond to infinitely many subsets, which means that it is a Dedekind-infinite set (remember we assume $\sf AC$).
To be fair, though, I don't see what good comes from that definition. |
H: Solving the domain and range of a region satisfying two inequalities?
The question I was provided was:
"Find the domain and range of the region satisfied by the following inequalities:
i) $y \ge (x-1)^2$
ii)$y \le2x+1$
Any help would be greatly appreciated. Would you recommend graphing or solving algebraically?
AI: I am giving you a very basic way to find the regions graphically. For $$y\ge(x-1)^2$$ note that $y=(x-1)^2$ is a polynomial which has $\mathbb R$ as its domain and obviously $\mathbb R_{\ge 0}$ as its range. To find out what the inequality tells you, you can pick two points, one inside and one outside of the region the parabola cuts out in $\mathbb R^2$; say $P=(1/2,2)$ inside and $Q=(3,1)$ outside. Now substitute the coordinates of $P$ into the inequality. You see $$(1/2-1)^2=1/4$$ and it is smaller than $2$. The same for $Q$ tells us $$(3-1)^2=4$$ is greater than $1$. So, the desired region satisfying $$y\ge(x-1)^2$$ is the parabola and the region which is enclosed by it.
H: Integrate: $\int_{a - i\infty}^{a + i\infty} \frac{e^{tz}}{z^2 + p^2}dz$
Q. Show that : $$\int_{a - i\infty}^{a + i\infty} \frac{e^{tz}}{z^2 + p^2}dz = \frac{\sin pt}{p}$$
I considered a closed contour consisting of the vertical line $\operatorname{Re} z = a$ together with a large arc $\Gamma$ closing it to the left, so that
$$\int_\Gamma \frac{e^{tz}}{z^2 + p^2}dz + \int_{a - i\infty}^{a + i\infty} \frac{e^{tz}}{z^2 + p^2}dz = 2 \pi i (\text{Residue}[f(z), z=ip] + \text{Residue}[f(z), z=-ip]) \; \; \; \;\; \;(1)$$
The first integral goes to zero.
$$\text{Residue}[f(z), z=ip] + \text{Residue}[f(z), z=-ip]= \frac{\sin (pt)}{p} \hspace{1cm} (2)$$
From $(1)$ and $(2)$ we get
$$\int_{a - i\infty}^{a + i\infty} \frac{e^{tz}}{z^2 + p^2}dz = 2 \pi i \frac{\sin pt}{p}$$
Did I make a mistake, or is the question (as stated) wrong?
AI: I would not use that contour. I prefer going, from the top of the vertical line at $a+i R$, to the left horizontally to $z=i R$, and then going in a semicircular arc between $\theta \in [\pi/2,3 \pi/2]$. In this case, the integral over the horizontal portion to the left behaves as
$$\frac{e^{i R t}}{R^2-p^2} \frac{1-e^{a t}}{t} $$
as $R \to \infty$. We get a similar expression for the horizontal segment at $z=-i R$.
It then becomes crystal clear that the integral along the circular arc vanishes, because the integral there looks like
$$i R \int_{\pi/2}^{3 \pi/2} d\theta e^{i \theta} \frac{e^{R t \cos{\theta}} e^{i R \sin{\theta}}}{R^2 e^{i 2 \theta}+p^2}$$
The magnitude of this integral vanishes as $R \to \infty$ because the cosine term is negative over the integration region. So, at least along this contour, the integral vanishes. |
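Either way, the residue computation in $(2)$ can be double-checked symbolically (a small sympy sketch of mine). It confirms that the residue sum is $\sin(pt)/p$, so the vertical-line integral equals $2\pi i\,\sin(pt)/p$, and the identity as stated in the question is indeed missing the usual $\frac{1}{2\pi i}$ factor from the inversion formula:

    import sympy as sp

    z = sp.symbols('z')
    t, p = sp.symbols('t p', positive=True)
    f = sp.exp(t*z) / (z**2 + p**2)

    total = sp.residue(f, z, sp.I*p) + sp.residue(f, z, -sp.I*p)
    print(sp.simplify(total.rewrite(sp.sin)))   # sin(p*t)/p (possibly in an equivalent form)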
H: Why do these trig functions "overpower" each other?
For example, $\sin(x)\cos(x)$ can be written as $\sin(2x)/2$; the limit as $x$ approaches $0$ of $\sin(x)\cos(x)$ is $0$, and the limit as $x$ approaches $\pi/2$ is $0$. I don't see a reason why sine always increases/decreases faster than cosine. Why does sine overpower cosine near $x=0$, and why does cosine overpower sine near $x=\pi/2$? I feel that if they increased the same way, the limit should be somewhere in the middle, such as $1/2$ for both of these values. If you try to take the derivative of each with respect to the other to see why one increases faster, you end up with the same problem, because the derivatives just keep giving you negative versions of the original problem.
AI: The limit of $\sin(x)\cdot \cos(x)$ as $x\rightarrow \pi /2$ is $0$.

So in both cases the limit of $\sin(x)\cdot \cos(x)$ is $0$: neither factor "overpowers" the other.

In fact they do increase/decrease at the same rate: since $\sin(\pi/2-x)=\cos(x)$, as you vary $x$ the two functions run through the same values (in mirror image), so they increase/decrease at the same rate.
H: Solve a simple equation with log in it
I'm stuck with solving this equation,
$$2 \log x = \log 9 $$
This is how far I made it:
\begin{align}
\log x &= \log 4,5 \\
x &= ?
\end{align}
I'm a beginner at logarithms so I appreciate ways to solve it and not just an answer.
AI: If we start with $$2 \log x = \log 9,$$ the first step is to move the 2, but you can't divide it over like that. The rule is that $a \log b = \log b^a$, so we get $2 \log x = \log x^2$.
Now our equation is $$\log x^2 = \log 9.$$ The next step is to use the fact that $\log A = \log B$ means $A = B$. In our case, that means $$x^2 = 9,$$ and you can solve for $x$. Remember to check that the answer makes sense. For instance, $x^2=9$ gives $x = \pm 3$, but you can't take the log of a negative, so $x \neq -3$.
H: if $p\implies q$ is the same as $\lnot p \lor q$, then...
If $p\implies q$ is the same as $\lnot p \lor q$, then what is $p\implies \lnot q$?
I'm not sure if this is $\lnot p \lor \lnot q$, or $\lnot p \lor q$.
I'm trying to figure this out, because i have a problem:
$\lnot(q \lor p) \to \lnot r$. I use De Morgan's law on this to make it $\lnot p \land \lnot q \to \lnot r$. Then I need to make it simpler; my understanding would be that it MIGHT be $\lnot(\lnot p \land \lnot q) \lor \lnot r$, which might be $(p \land q) \lor r$. However, I'm almost 100% sure I'm wrong.
I can't wrap my head around logic; it's so difficult for me to comprehend all of these rules. Could someone please explain the answer to my original question, provide the correct answer for the $\lnot(q \lor p) \to \lnot r$ question, and show where I went wrong? I would be very grateful.
p.s. if you know of any easy to comprehend resources for logic that would be great.
Thanks.
AI: Well, $p \to r$ is the same as $\neg p \vee r$. Thus if $r=\neg q$, this means that $p \to \neg q$ is the same as $\neg p \vee (\neg q)$, which by order of logical operations can be written $\neg p \vee \neg q$.
No need to worry about De Morgan's laws! (As for $\lnot(q\lor p)\to\lnot r$: by the same rewriting it is $\lnot\lnot(q\lor p)\lor\lnot r$, i.e. $(q\lor p)\lor\lnot r$; the double negation cancels, and the conclusion $\lnot r$ keeps its negation.)
H: Prove $\int_2^\infty{\frac{\ln(t)}{t^{3/2}}},\mathrm{d}t$ converges
Show, using a comparison test, that $\displaystyle \int_2^\infty{\frac{\log{t}}{t^{\frac32}}}\mathrm{d}t$ converges.
All the answers I've tried show it diverges, taking $\log{t} \le t^{1/2}$ and $\log{t} \le t$.
Cheers
AI: Solving $\log t = t^{1/4}$ we get $t_1=4.177$, $t_2=5503.66$ from W|A.

Differentiating, $(\log t)' = \frac{1}{t}$ and $(t^{1/4})' = \frac 1 4 t^{-3/4}$, and for $t$ large enough, $\frac1t < \frac14 t^{-3/4}$.

So for $t \ge t_2$, $\log t \le t^{1/4}$.
$$\int_2^\infty \frac{\log t}{t^{3/2}}dt = \int_2^{t_2} \frac{\log t}{t^{3/2}}dt + \int_{t_2}^\infty \frac{\log t}{t^{3/2}}dt \le \int_2^{t_2} \frac{\log t}{t^{3/2}}dt + \int_{t_2}^\infty \frac{t^{1/4}}{t^{3/2}}dt$$.
We know that $\displaystyle \int_{t_2}^\infty \frac{t^{1/4}}{t^{3/2}}dt$ converges. So the integral converges by comparison.
Integrating by parts:
$$\int_2^\infty \frac{\log x}{x^{3/2}}dx = -\frac{2 \log x}{\sqrt x} \big |_2^\infty - \int_2^{\infty} \frac 1 x \frac{-2}{\sqrt x}dx = \sqrt 2 \log 2 + 2 \sqrt 2 $$ |
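As a numerical sanity check (my own sketch, using scipy), both the integral and the integration-by-parts value agree:

    import numpy as np
    from scipy.integrate import quad

    val, err = quad(lambda t: np.log(t) / t**1.5, 2, np.inf)
    print(val)                                    # ~3.80869
    print(np.sqrt(2)*np.log(2) + 2*np.sqrt(2))    # same value, from the parts computation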
H: $p^{3}+m^{2}$ is square of a number.
Well, I thought this is a nice problem, so I will post it here.
1) Prove that for every natural numbers $m$, There is at most two primes $p$ where $p^{3}+m^{2}$ is a perfect square.
2) Find all natural numbers $m$ so that $p^{3}+m^{2}$ is the square of a number for exactly $2$ prime numbers $p$.
AI: I have my own answer, so tell me if you have a better one :)

Suppose that $p^{3}+m^{2}=b^{2}$ for some natural number $b$. Then $(b-m)(b+m)=p^{3}$. We know that the number $p^{3}$ can only be factored as $(1)(p^{3}),(p)(p^{2}),(p^{2})(p),(p^{3})(1)$. Knowing $b+m>b-m$ makes the only possibilities for $(b-m,b+m)$ be $(1,p^{3})$ and $(p,p^{2})$.
Case 1: $b-m=1$ and $b+m=p^{3}$
This results in $p^{3}-1=2m$ and so $p=\sqrt[3]{2m+1}$
Case 2: $b-m=p$ and $b+m=p^2$
This one results in $p^{2}-p=2m$ and so $p=\frac{1+\sqrt{8m+1}}{2}$ (since the negative root is unacceptable).
This itself solves the first part, since for every $m$ the prime $p$ can only be the first one, the second one, both, or neither.

Now we have to find all numbers $m$ so that both $\frac{1+\sqrt{8m+1}}{2}$ and $\sqrt[3]{2m+1}$ are prime. We first find all values of $m$ for which $8m+1$ is a square. Suppose $8m+1=n^2$ for some natural number $n$; then $n$ must be odd, say $n=2x+1$, and so $m=\frac{x(x+1)}{2}$. Putting this $m$ in the first case results in $2\cdot\frac{x(x+1)}{2}+1=p^{3}$ and so $x^2+x+1=p^3$. Now, using the solutions which are given here, the only possible value of $(x,p)$ becomes $(18,7)$, and so $m=\frac{x(x+1)}{2}=\frac{18\cdot19}{2}=171$. Checking this answer in both cases, we obtain $p=7$ and $p=19$. So the only possible $m$ is $171$.
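A brute-force check (my own sketch, searching primes up to an arbitrary bound) confirms that $m=171$ admits exactly the two primes $p=7$ and $p=19$:

    import math
    from sympy import isprime

    m = 171
    hits = [p for p in range(2, 10**5) if isprime(p)
            and math.isqrt(p**3 + m*m) ** 2 == p**3 + m*m]
    print(hits)   # [7, 19]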
H: very basic short exact sequence problem
Given a short exact sequence $0 \rightarrow A \rightarrow B \rightarrow C \rightarrow0$ and $f:A \rightarrow B, g: B \rightarrow C$, why is $C$ isomorphic to $B/A$? All I can show is that $C$ is isomorphic to $B/ Im (A)$. I've looked through a few books but I still don't understand why $Im (A) = A$.
AI: Strictly speaking, $B/A$ doesn't make sense unless $A\subseteq B$. However, since the second arrow is a monomorphism, you can identify $Im(A)$ with $A$ (they are isomorphic).
So you're exactly right that $B/Im(A)\cong C$, and you go one step further by considering $A$ and its monomorphic image as the same thing. |
H: Complexified tangent vector, complex manifold
Consider a complex submanifold $M$ of a complex ambient vector space $X$. Suppose you have a base point $p \in M$ and a $C^1$ arc $\gamma(t)$ passing through $p$ and staying in $M$, with tangent vector at $p$ denoted by $u$ (the tangent vector is taken by considering $X$ as a real vector space, and $\gamma : [0,1] \rightarrow M$). Is there a complex analytic curve $\tilde{\gamma} : \Delta \rightarrow M$ ($\Delta$ being a complex disk) passing through $p$, whose image is in $M$, with tangent vector at $p$ equal to $u$ (as an element of $T_pM$) ? What if the curve $\gamma$ is only differentiable at $p$ ? Intuitively I think the answer is yes, but I can't construct such a curve $\tilde{\gamma}$. Thanks in advance for your help
AI: Just work in local holomorphic coordinates in which $M$ corresponds to an open subset of $\mathbb C^k$ near $p$. The tangent vector $u$ then corresponds to some vector $v\in\mathbb C^k$; consider the complex line spanned by $v$, i.e. take $\tilde\gamma(t)=tv$ for $t$ in a small disk. This curve is holomorphic, stays in $M$, and has tangent vector $u$ at $p$. Since the construction only uses the tangent vector, it also works when $\gamma$ is merely differentiable at $p$.
H: Counting Number of Objects - When to Add One Back
I know this might be a very basic question. Sometimes to count objects, we just subtract. For example -- If there are 5 apples and I take away 1, then remaining are $5 - 1 = 4$ apples.
But other times, we have to add one back.
For example -- the number of days from the 1st to the 5th of any month = $5 - 1 + 1 = 5$. I added the last one back because I'm including the 1st.
What are these properties formally called? Are there more cases? Are there more formal explanations and properties of these cases in combinatorics?
AI: It's a little bit subtle.
When we think about apples we say that $5$ apples is four more apples than $1$ apple. Because we're comparing the set of 5 whole apples to one whole apple.
When we think about days you're comparing the end of the first of June to the beginning of the fifth of June.
So the difference in days is only three.
So in fact it's a question of language not mathematics. If your train is leaving Ambridge, and it stops at Brumley, Catford, and Dronby before arriving at Endton where you get off. You might say there are four stops until Endton (Brumley, Catford, Dronby and Endton) or you might say there are three (Brumley, Catford and Dronby). It depends which country you come from. In England we say four, in many European countries they say three. I haven't done an exhaustive survey. Most people are pretty convinced their way is right, but it's just convention.
So the only difference is whether the last item is included, which differs by convention depending on what the items are. As mathematicians we think imprecision is bad, so we don't talk about this sort of thing much. (In combinatorics and programming this off-by-one ambiguity is often called the fencepost problem, and counting both endpoints is called inclusive counting.)
H: Another basic short exact sequence problem
In the following commutative diagram of R-modules, all of the rows and columns are exact. Prove that $K$ is isomorphic to $L$.
\begin{array}{ccccccccccc} &&&&&&&&0 &&\\
&&&&&&&&\downarrow &&\\ &&&&&& 0 & & L &&\\ &&&&&& \downarrow && \downarrow &&\\ &&&& M^{\prime\prime} & \rightarrow& N^{\prime\prime} & \rightarrow & P^{\prime\prime} & \rightarrow& 0\\ &&&& \downarrow & & \downarrow && \downarrow &&\\ && 0 & \rightarrow & M & \rightarrow & N & \rightarrow & P & \rightarrow & 0\\ &&&& \downarrow & & \downarrow && \downarrow &&\\ 0 & \rightarrow & K & \rightarrow & M^\prime & \rightarrow & N^\prime& \rightarrow & P^\prime & \rightarrow & 0\\ &&&& \downarrow & & \downarrow && \downarrow &&\\
&&&& 0 & & 0 && 0 && \end{array}
Again I have no idea how to do, it seems so complicated, please helps.
AI: HINT The snake lemma tells us that there is an exact sequence $$0 \to L \to M^\prime.$$
HINT 2 Kernels of $R$-modules have a universal property.
Added: The universal property of a kernel is this. A kernel of a morphism $f:M \to N$ is a morphism $i:K \to M$ such that $f \circ i = 0$ and such that is universal with respect to this property, meaning that if $i^\prime:K^\prime \to M$ is any other morphism with $f \circ i^\prime = 0$, then there exists a unique morphism $u:K^\prime \to K$ such that $i^\prime = i \circ u$.
By standard category theoretic considerations, the universal property implies that any two kernels are uniquely isomorphic. For more on this, see the Wikipedia article on kernels. |
H: distance travelled after nth bounce
A ball is thrown vertically to a height of $625$ meters from the ground. Each time it hits the ground it bounces to $\frac{2}{5}$ of the height it fell from in the previous stage. How far will the ball travel during the first $20$ bounces? How can we derive a formula for finding this?
AI: Formula for $n$ bounces for your question: $$\displaystyle 625\left(1+2\sum_{k=1}^{n} \left(\frac{2}{5}\right)^k\right)$$ |
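Summing the geometric series gives the closed form $625\left(1+\frac{4}{3}\left(1-\left(\frac{2}{5}\right)^n\right)\right)$. A quick check for $n=20$ (a sketch of mine):

    total = 625 * (1 + 2 * sum((2/5)**k for k in range(1, 21)))
    closed = 625 * (1 + (4/3) * (1 - (2/5)**20))
    print(total, closed)    # both ~1458.33 meters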
H: Why is the index $\left[\mathbb{F}_{p^{n}}:\mathbb{F}_{p}\right]$ between the field with $p^{n}$ elements and $\mathbb{F}_{p}$ equal to $n$?
Can someone explain this?
$ \left[\mathbb{F}_{p^{n}}:\mathbb{F}_{p}\right]=n $
AI: $$|\Bbb F_p|=p\;,\;\;|\Bbb F_n^n|=p^n$$
amd since any element in the latter is a unique linear combination of some elements of it and scalars from the former, it must be that those some elements are exactly $\,n\,$ in number. |
H: Regularity of balanced binary strings
How can one tell which number of propositional variables is necessary
to express a Boolean function given as a sequence of 0s and 1s (a
binary string) of length $2^n$ as a Boolean formula?
The extremal cases are clear: if no 0s or no 1s are present, we need no variables at all, if there is exactly one 0 or one 1, we need all $n$ variables. I am especially interested in the case that there are equally many 0s and 1s (balanced binary strings). This is the only case, where one variable may suffice – but also all may be necessary.
I start from the assumption that it's a matter of regularity or complexity of the binary string.
Assign to each binary string $\sigma$ of length $2^n$ recursively a sequence of symbols:
a + when $\sigma = xx$
a - when $\sigma = x\overline{x}$ with $\overline{x}$ the "negation" of $x$.
an S otherwise (S for stop)
Proceed with $x$ in the first two cases.
One needs exactly one variable if the sequence has length $n$ and contains only +'s with one single -. This corresponds to maximal regularity.
Conjecture: One needs $n - n_+$ variables, $n_+$ the number of +'s in the sequence.
Is this approach of a "recursively defined regularity" sensible, and if so: under which name is it already known, eventually?
AI: The first rule corresponds to checking if the first variable is needed in the formula or not... but not whether any of the others might be redundant.
As a simple counter-example, consider the function $f(A,B,C,D)=AB \vee \bar{A}C$. The corresponding binary string should be 0011001100001111, which is neither of the $xx$, nor of the $x\bar{x}$ form (so $n_{+}=n_{-}=0$). Yet, only three variables are needed to represent it.
Of course, the rules can be amended to check for additional types of redundancies (e.g. checking if every digit is doubled corresponds to checking if the last variable is relevant), but doing so pretty much boils the problem down to a tautology -- if the function value does not depend on a variable, we don't need to use that variable :-) |
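For what it's worth, the check "does the function depend on variable $i$?" is easy to run directly on the truth-table string; here is a small sketch of mine that reproduces the count of three relevant variables for the counter-example above:

    def relevant_vars(bits):
        """Count how many inputs a Boolean function, given as a truth-table
        string of length 2**n, actually depends on."""
        n = len(bits).bit_length() - 1
        count = 0
        for i in range(n):                      # variable i, most significant first
            stride = 1 << (n - 1 - i)
            if any(bits[j] != bits[j ^ stride] for j in range(len(bits))):
                count += 1                      # flipping variable i changes f somewhere
        return count

    print(relevant_vars("0011001100001111"))    # 3 -- the last variable D is redundant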
H: How to prove $ab+bc+cd\le\dfrac{5}{4}$
Let $a,b,c,d\in \Bbb R$ with $a,b,c,d>-1$ and $a+b+c+d=0$.
prove that
$$ab+bc+cd\le\dfrac{5}{4}$$
I have this solution
if $b\le c$, then
$$ab+bc+cd=a(b-c)-c^2\le -(b-c)-c^2=-(c-\dfrac{1}{2})^2+\dfrac{1}{4}-b\le\dfrac{1}{4}-b\le \dfrac{5}{4}$$
and the case $b>c$ goes by the same method. I think this inequality may have other nice solutions. Thank you.
AI: First we rewrite $ab+bc+cd= (a+c)(b+d) - ad = -(a+c)^2-ad$.
This expression can only be positive if either $a$ or $d$ is negative. Thus wlog we choose $d<0$ and $a>0$. Now we have $-(a+c)^2-ad <a-(a+c)^2$. If $a\leq1$, then this expression is $\leq1$ as well, thus wlog we assume $a>1$. In this case $a-(a+c)^2 < a-(a-1)^2 = -a^2+3a-1 = -\left(a-\frac{3}{2}\right)^2+\frac{5}{4}\leq \frac{5}{4}$. |
H: Vector valued Mean value theorem: Norm for the gradient
The Wikipedia article on the vector-valued mean value theorem says:
For $f:\mathbb R^n \to \mathbb R^n$, if the gradient is bounded,
$$
\| \nabla f \| \le M,
$$
then
$$
\|f(x)-f(y) \| \le M \|x-y\|.
$$
What is the norm used for the gradient $\| \nabla f \|$?
I tried to look in some other references.
There the matrix norm is mentioned.

They gave one example, where I don't understand how they get the bound of $\frac14$ on $\|\nabla g \|$.
Define $g:\mathbb R_+^2 \to \mathbb R_+^2$ with
$$
g(x,y)=\left( \frac{1}{4+x+y}, \frac{1}{4+x+y} \right),
$$
then, entry-wise,
$$
|\nabla g| \le \begin{pmatrix} \frac{1}{16} & \frac{1}{16} \\ \frac{1}{16} & \frac{1}{16} \end{pmatrix}=A,
$$
whence
$$
\|\nabla g \| \le \|A\|=\frac14.
$$
What norm reduces $\|A\|$ to $\frac14$?
AI: For finite-dimensional spaces, matrix norms are equivalent, so the choice does not matter for the validity of the mean value inequality; it only changes the constant $M$. In your particular example, the bound $\frac14$ arises if $\|A\|$ is taken to be the sum of the absolute values of the entries, $\sum_{i,j}|a_{ij}|=4\cdot\frac1{16}=\frac14$; the usual induced norms $\|A\|_1$, $\|A\|_2$, $\|A\|_\infty$ of this particular matrix all equal $\frac18$ instead.
H: Integral of a rational function: Proof of $\sqrt{C}\,\int_{0}^{+\infty }{{{y^2}\over{y^2\,C+y^4-2\,y^2+1}}\;\mathrm dy}= {{\pi}\over{2}}$?
I suspect that
$$\sqrt{C}\,\int_{0}^{+\infty }{{{y^2}\over{y^2\,C+y^4-2\,y^2+1}}\;\mathrm dy}=
{{\pi}\over{2}}$$
for $C>0$.
I tried $C=1$, $C=2$, $C=42$, and $C=\frac{1}{1000}$ with Wolfram Alpha. But how to prove it?
AI: Let $f : \Bbb{R} \to \Bbb{C}$ be an integrable even function. Then we claim that
$$ \int_{0}^{\infty} f\left( y - \frac{1}{y} \right) \, dy = \int_{0}^{\infty} f(x) \, dx. \tag{1} $$
Indeed, let $I$ denote the LHS of $(1)$. Then by the substitution $y \mapsto y^{-1}$ we have
$$ I = \int_{0}^{\infty} f\left( \frac{1}{y} - y \right) \, \frac{dy}{y^2} = \int_{0}^{\infty} \frac{1}{y^2} f \left( y - \frac{1}{y} \right) \, dy. $$
Thus with the substitution $x = y - y^{-1}$, we have
$$ 2I = \int_{0}^{\infty} \left( 1 + \frac{1}{y^2} \right) f\left( y - \frac{1}{y} \right) \, dy = \int_{-\infty}^{\infty} f(x) \, dx = 2 \int_{0}^{\infty} f(x) \, dx $$
and the conclusion follows.
Now you can plug
$$ f(x) = \frac{1}{C + x^2} $$
to $(1)$ to obtain the conclusion:
$$ \int_{0}^{\infty} \frac{y^2}{Cy^2 + (y^2 - 1)^2} \, dy = \int_{0}^{\infty} \frac{dy}{C + (y - y^{-1})^2} = \int_{0}^{\infty} \frac{dx}{C + x^2} = \frac{\pi}{2\sqrt{C}}. $$
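A numerical check of the original claim for a few values of $C$ (my sketch, using scipy):

    import numpy as np
    from scipy.integrate import quad

    for C in (0.5, 1.0, 2.0, 42.0):
        val, _ = quad(lambda y: y**2 / (C*y**2 + (y**2 - 1)**2), 0, np.inf)
        print(C, np.sqrt(C) * val)    # always ~1.5708 = pi/2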
H: SO(2,1) not connected
I am trying to show that $SO(2,1)$ is not connected, but I have no idea where to start really. I know that it is connected if there is a path between any two points. My definition of $SO(2,1)$ is:
$SO(2,1)=\{X\in Mat_3(\mathbb{R}) \mid X^t\eta X=\eta, \ \det(X)=1\}$ where $\eta$ is the matrix defined as: $$\left ( \begin{array}{ccc} 1 &0&0\\0&1&0\\0&0&-1\end{array}\right )$$
Thanks for any help
AI: Consider the orbit of the vector $(0,0,1)$ under $SO(2,1)$; you should find that it's disconnected (note that there are elements of $SO(2,1)$ which map $(0,0,1)$ to itself, or to $(0,0,-1)$, and then show that it can not be mapped to any vector $(a,b,0)$). So this gives us a continuous map from $SO(2,1)$ to a disconnected space, which implies that $SO(2,1)$ is disconnected. |
H: How to evaluate $\lim_{x \to \infty}\left(\sqrt{x+\sqrt{x}}-\sqrt{x-\sqrt{x}}\right)$?
How can I evaluate $\lim_{x \to \infty}\left(\sqrt{x+\sqrt{x}}-\sqrt{x-\sqrt{x}}\right)$?
AI: HINT:
As we are dealing with the limit in real numbers, $x>0\implies x\to+\infty$
Put $x=y^2$
$$\implies \sqrt{x+\sqrt{x}}-\sqrt{x-\sqrt{x}}$$
$$=\sqrt{y^2+y}-\sqrt{y^2-y}$$
$$=\frac{y^2+y-(y^2-y)}{\sqrt{y^2+y}+\sqrt{y^2-y}} \text{ (Rationalizing the numerator )}$$
$$=\frac2{\sqrt{1+\frac1y}+\sqrt{1-\frac1y}} (\text{Dividing the numerator & the denominator by } y)$$
Now, as $x\to\infty, y\to\infty$
Alternatively, put $x=\frac1{h^2}$
$$\sqrt{x+\sqrt{x}}-\sqrt{x-\sqrt{x}}$$
$$= \frac{\sqrt{1+h}-\sqrt{1-h}}h$$
$$= \frac{1+h-(1-h)}{(\sqrt{1+h}+\sqrt{1-h})h} \text{ (Rationalizing the numerator )}$$
$$=\frac2{\sqrt{1+h}+\sqrt{1-h}}\text{ if }h\ne0$$
As $x\to\infty, h\to0\implies h\ne0$ |
H: Looking for example of an order homomorphism that doesn't preserve joins.
I know that not every order homomorphism preserves joins. But, I can't think of an example!
Both minimal examples and 'natural' examples welcome.
AI: Let $S$ be the three-element poset $\{a,b,c\}$ in which $a$ and $b$ are incomparable and $a<c$, $b<c$ (so that $a\vee b=c$),

and let $T$ be the four-element poset $\{y,z,x,w\}$ in which $y$ and $z$ are incomparable, $y<x$, $z<x$, and $x<w$ (so that $y\vee z=x$).
Then the map $\phi:S\to T$ defined by
$$\phi(a)=y,\quad \phi(b)=z,\quad\phi(c)=w$$
is an order homomorphism, but
$$\phi(a\vee b)=\phi(c)=w\neq x=\phi(a)\vee\phi(b).$$ |
H: Logarithmic equation. Need to know if i am teaching right
Two of my friends are studying for a test. They asked me about a simple question, but they told me that I was wrong. I could be wrong, but I need you guys to make sure that they learn the right stuff; if I was right, I can then tell them how to do the equations.
The question:
Assume that $\log x = 3$ and $\log y = 4$
Calculate the following expression: $\log x^4 + 2\log y - \log(xy)$
I got it to be
$$\log \left( \frac{x^4 y^2}{x y} \right)$$
Then they just change the $x$ and $y$ to the assumed value.
So am I right or am I wrong?
AI: But they aren’t given the values of $x$ and $y$: they’re given the values of $\lg x$ and $\lg y$. Specifically, $\lg x=3$ and $\lg y=4$, so
$$\lg x^4+2\lg y-\lg(xy)=4\lg x+2\lg y-(\lg x+\lg y)=12+8-7=13\;.$$
It’s perfectly true that
$$\lg x^4+2\lg y-\lg(xy)=\lg\frac{x^4y^2}{xy}\;,$$
but this does not really help them to solve the problem. |
H: Quick Conditional Probability Question
Been going through lecture notes (ones I made myself, so who knows how accurate they are - or rather - they most certainly are not) and can't seem to understand this one example
Example: Two horses - A and B. A wins with probability 0.5, B wins with probability 0.3. What is the probability that B wins?
The calculation that follows:
$$P(B|A^C)=\frac{P(B\cap A^C)}{P(A^C)}=\frac{0.3}{0.5}=0.6 $$
Now, I have no trouble with the theory, just understanding what is actually meant by the question. My questions:
1) Should the question really be "What is the probability that B wins, given that A doesn't?"
2) Is $P(B)=P(B\cap A^C)$, since $A\cap B=\emptyset$ (or is that even true?)
Perhaps 1) and 2) are obviously true and I just forgot to add that sentence in my notes. But I want to be sure (and be able to sleep at night :) )
Thanks for any help.
AI: Both observations are correct and "obvious". But what's obvious at the time (or to an expert) isn't always obvious when you're revising. Let this be a lesson to you about high quality note-taking. (Even if it makes me a massive hypocrite) |
H: Using the hypothesis $\frac{1}{a}+\frac{1}{b}+\frac{1}{c}=\frac{1}{a+b+c}$ to prove something else
Assuming that $$\frac{1}{a}+\frac{1}{b}+\frac{1}{c}=\frac{1}{a+b+c}$$
Is it possible to use this fact to prove something like:
$$\frac{1}{a^{2013}}+\frac{1}{b^{2013}}+\frac{1}{c^{2013}}=\frac{1}{a^{2013}+b^{2013}+c^{2013}}$$
Just curious. Thanks for the help!
AI: Well, assuming that $a,b,c\neq 0$, we can see that:
\begin{align}
\frac{1}{a}+\frac{1}{b}+\frac{1}{c}&=\frac{1}{a+b+c}
\\ \\
\frac{ab+bc+ac}{abc}&=\frac{1}{a+b+c}
\\ \\
a^2b+abc+a^2c+ab^2+b^2c+abc+abc+bc^2+ac^2&=abc
\\ \\
(ab+b^2+ac+bc)c+(ab+b^2+ac+bc)a&=0
\\ \\
(ab+b^2+ac+bc)(c+a)&=0
\\ \\
(a+b)(b+c)(c+a)&=0
\end{align}
Now we know that $(a+b)(b+c)(c+a)=0$
$\therefore a=-b\text{ or }b=-c\text{ or }a=-c$.
We can use the fact $a=-b$ to our LHS:
$$\frac{1}{(-b)^{2013}}+\frac{1}{b^{2013}}+\frac{1}{c^{2013}}=-\frac{1}{b^{2013}}+\frac{1}{b^{2013}}+\frac{1}{c^{2013}}=\frac{1}{c^{2013}}$$
$$\text{RHS: }\frac{1}{-b^{2013}+b^{2013}+c^{2013}}=\frac{1}{c^{2013}}=LHS$$ |
H: General Solution of Diophantine equation
Having the equation:
$$35x+91y = 21$$
I need to find its general solution.
I know gcf$(35,91) = 7$, so I can solve $35x+91y = 7$ to find $x = -5, y = 2$. Hence a solution to $35x+91y = 21$ is $x = -15, y = 2$.
From here, however, how do I move on to finding the set of general solutions? Any help would be very much appreciated!
Cheers
AI: You really ought to have checked your particular solution, because it isn’t one:
$$35(-15)+91\cdot2=-343\;,$$
not $21$. Divide the original equation by $7$ to get $5x+13y=3$. By inspection $x=-2$, $y=1$ is a solution. Suppose that $x=-2+a$, $y=1+b$ is also a solution. Then
$$5(-2+a)+13(1+b)=3\;,$$
so
$$-10+5a+13+13b=3\;,$$
and therefore $5a+13b=0$. Thus, $b=-\frac5{13}a$. Since $a$ and $b$ must be integers, this says that $a$ must be a multiple of $13$. Say $a=13k$. What does that make $b$? Can you now write down the general solution in terms of $k$? |
H: A basic question on the definition of $E[X]$
My question is regarding the definition of $E[X]$ in a probability book. It starts with the definition in case of a simple random variable (a random variable which takes only finite number of values) where $E[X]$ is defined by :
$$E[X]=\sum_{i=1}^{n}x_i P(X=x_i)$$
Now a 3-dimensional figure is drawn (I am confused about how to describe the figure in words) and it is said that from that figure it can be understood that the previous sum can be viewed as an integral over $\Omega$. With this motivation the following equation is written :
$$E[X] = \int _{\Omega}X(\omega)P(d\omega)$$
I am confused about the whole method of defining $E[X]$.
Actually, my question is why a "3-dimensional figure" is needed to describe expectation. The finite sum I have mentioned can be thought of as an uncountable sum of $X(\omega)P(d\omega)$ over all $\omega$. Now, if our probability measure is Lebesgue measure, then this uncountable sum becomes $X(\omega)\,d\omega$ summed over all $\omega$, which is nothing but $\int_{\Omega}X(\omega)d\omega$, and this equals $\int_{\Omega}X(\omega)P(d\omega)$.
AI: In the first case the probabilities are assigned as discrete values.
In the second case the probability of an event (outcome) is determined by a probability measure $P(d\omega)$, which can be a quite general construction. Often it amounts to ordinary integration with respect to a (nonnegative) weight called a probability density function $p(\omega)$, so that the integral can be written:
$$ E[X] = \int_\Omega X(\omega) p(\omega) d\omega $$
where $\omega$ represents a point in the "event space" $\Omega$.
In a first year graduate course in real analysis one is typically taught the theory of integration at a level that allows discrete and continuous measures (and probablity measures in particular) to be combined. |
H: Fermat Last Theorem for non Integer Exponents
We know that Fermat's last theorem is true, so there are no positive integer solutions to
$$x^n+y^n=z^n$$
for $n\in\mathbb{N}$ and $n>2$.
But what about if $n\in\mathbb{R}$ or $n\in\mathbb{R}^+$?
AI: Suppose $z> \max(x,y)$. Then $x^0+y^0 = 2 > z^0$, but there exists some $N$ such that $x^N+y^N<z^N$, since $(x/z)^N+(y/z)^N\to0$. The function $t\mapsto x^t+y^t-z^t$ is continuous, so by the intermediate value theorem there exists some $n\in[0,N]$ satisfying $x^n+y^n=z^n$.
H: What's more robust than a structural homomorphisms?
This question isn't category theory; but, category theoreticians tend to be interested in mathematical structure, so I thought the answer might exist within that knowledge base.
Given two mathematical structures $X$ and $Y$ with the same pattern of arities, there is a natural notion of homomorphism $X \rightarrow Y$, described here, called a structural homomorphism. However, this notion lacks "robustness."
For instance, suppose that $X$ and $Y$ are partially ordered sets and that $f$ is a structural homomorphism $f : X \rightarrow Y$. Then $f$ is an order homomorphism. Suppose also that $X$ and $Y$ happen to be lattices. Then we can add meets and joins to the data of $X$ and $Y$, thereby obtaining new structures $X'$ and $Y'$. However, it's feasible that $f$ might fail to be a homomorphism $X' \rightarrow Y'$, since not every order homomorphism is a lattice homomorphism.

So in general, if $f$ is a structural homomorphism $X \rightarrow Y$, and we extend $X$ and $Y$ to new structures $X'$ and $Y'$ by defining new relations/operations in terms of the old ones (using exactly the same definitions on both sides), nonetheless $f$ may fail to be a homomorphism $X' \rightarrow Y'$.
Now I originally thought that the notion of a "structural embedding" described here in that same article, might be robust with respect to the defining of new relations/operations. But that's completely wrong.
So my question is, what's more robust than a structural homomorphism?
AI: Note that the example given by Zev in the previous thread is a structural embedding. So the same example shows that embedding between posets need not preserve meet/join.
For a stronger notion you should probably talk about theories as well, and require the embedding to preserve truth values of sentences, in the sense of an elementary embedding. Then you can extend definable operations from one structure to the next. |
H: Finding the Euler Lagrange equation - differentiation
I'm teaching myself the basics of the calculus of variations. So far I know how to calculate the Euler-Lagrange equation for simple functionals.
I'm now stuck on how to compute the total differentiation of the following problem:
$$I[y]=\int_0^1 (y\frac{dy}{dx})^2 -\lambda y^2 \ dx$$
To calculate the Euler Lagrange equation I have the following:
$$\frac{d}{dx}(2y^2y')-2y^2\frac{dy}{dx}+2\lambda y=0$$.
Is this correct? If so, I'm unsure of how to evaluate the total differentiation part, that is, how to take the total derivative in this case.
AI: For a functional
$$
I[y]=\int_a^bf(x,y,y')
$$
the Euler-Lagrange equation is defined as
$$
\frac{\partial f}{\partial y}-\frac{d}{dx}\left(\frac{\partial f}{\partial y'}\right)=0
$$
For the $I$ you gave above, $f(x,y,y')=(yy')^2-\lambda y^2$, hence
the E-L equation for $I$ is
$$
2y\left(\frac{dy}{dx}\right)^2-2\lambda y-\frac{d}{dx}\left(2y^2\frac{dy}{dx}\right)=0
$$
Your only mistake is that $\frac{\partial f}{\partial y}=2\color{red}{y(y')^2}-2\lambda y$, not $2\color{red}{y^2y'}-2\lambda y$ as you have written. Does this help you go from here?
Edit. Since $f$ does not depend upon $x$, we can simplify the problem.
Lemma. For a functional
$$
I[y]=\int_a^b f(y,y')
$$
which is independent of $x$, we have that
$$
\frac{d}{dx}\left(y'\frac{\partial f}{\partial y'}-f\right)=0
$$
Proof. Just calculate the derivative
$$
\frac{d}{dx}\left(y'\frac{\partial f}{\partial y'}-f\right)=y''\frac{\partial f}{\partial y'}+y'\frac{d}{dx}\frac{\partial f}{\partial y'}-\frac{\partial f}{\partial x}-y'\frac{\partial f}{\partial y}-y''\frac{\partial f}{\partial y'}
$$
Since $f$ is independent of $x$, $\frac{\partial f}{\partial x}=0$, and we can take out a factor of $y'$
$$
\frac{d}{dx}\left(y'\frac{\partial f}{\partial y'}-f\right)=y'\left(\frac{d}{dx}\frac{\partial f}{\partial y'}-\frac{\partial f}{\partial y}\right)
$$
which is clearly equal to $0$ by E-L. QED
This means that
$$
y'\frac{\partial f}{\partial y'}-f=\text{constant}
$$
Applying this to your function $I$ gives
$$
\begin{align*}
y'(2y^2y')-(yy')^2+\lambda y^2&=C \\
y^2\left(\left(\frac{dy}{dx}\right)^2+\lambda\right)&=C \\
\frac{dy}{dx}&=\sqrt{\frac{C}{y^2}-\lambda}
\end{align*}
$$ |
H: Question about the definition of representability of a quadratic form
Say I have an integral quadratic form $q$ on $\mathbb{Z}^r$ and another integral binary quadratic form $\tilde{q}$. What does it mean for $q$ to primitively represent $\tilde{q}$? I can't seem to find the definition anywhere. Of course $q$ primitively represents an integer $m$ if there exists a primitive element $\alpha\in\mathbb{Z}^r$ such that $q(\alpha)=m$, but I'm not sure what it means to represent another quadratic form (especially when the other quadratic form is not defined on the same number of variables!).
AI: You take the Gram matrix, which is generally taken to be the Hessian matrix of second partial derivatives. Let $A$ be the Gram matrix for your $q,$ and $B$ be the Gram matrix for the other one. Representation is a rectangular matrix $P$ such that $$ P^T A P = B. $$ Primitive representation is if the GCD of all the entries of $P$ is $1.$ Where are you reading about this?
The case with which you are familiar is when $P$ is a column vector and $B$ is a 1 by 1 matrix, that is, a number.
On page 18, remark 35, you need Estes and Pall (1973), which I put at TERNARY. The principal forms are the squares in the class group. The spinor kernel is the fourth powers in the group. Your $\bar{s}(D)$ is the number of spinor genera per genus, which is the same as the number of squares divided by the number of fourth powers. |
H: Two random variable with the same variance and mean
Let $Y\in L^{2}(\Omega,\Sigma,P)$ and suppose $E[Y^2|X]=X^2$ and $E[Y|X]=X$. Can we prove that $Y=X$ almost surely?
My partial answer:
By the definition of conditional expectation we have $E[Y^2]=\int_{\Omega} E[Y^2|X] dP=\int_{\Omega}X^2 dP=E[X^2]$ and $E[Y]=\int_{\Omega} E[Y|X] dP=\int_{\Omega}X dP=E[X]$. Hence, Var$(X)=$Var$(Y)$.
AI: We have that
$$E((X-Y)^2) = E(X^2) + E(Y^2) - 2 E(XY) = 2E(X^2) -2 E(XY)$$
the last equality being for the things you already proved.
On the other hand
$$ E(XY) = E ( E(XY|X) ) = E (X E(Y|X) ) = E(X \cdot X) = E(X^2) $$
The second equality is due a property of conditional expectations (that is essential!). Finally the variables are equal in $L^2$ which implies that they are equal a.s. |
H: Prove that nearly all positive integers are equal to $a + b + c$ where $a | b$ and $b | c$, $a \lt b \lt c$
If a positive integer $n$ is equal to $a + b + c$ where $a | b$, $b | c$ and $a \lt b \lt c$, let it be called "faithful". Prove that nearly all numbers are faithful and list the non-faithful numbers.
Here's where I am:
Let the number be $n$, $b = ap$ and $c = bq = apq$ where $p ,q \in Z_+$.
Let the set of faithful numbers be $F$.
$$n = a + b + c$$
$$\therefore n = a + ap + apq$$
Since $a<b<c$ forces $p, q \ge 2$, we get $n = a(1+p+pq) \ge 7$. Hence $1, 2, \ldots, 6 \notin F$.
Now what?
AI: Let $a=2^k$ for some $k\ge 0$, and let $p=2$. Then $a+ap+apq=2^k(3+2q)$, where we must have $q\ge 2$. $\{3+2q:q\ge 2\}$ is the set of odd numbers greater than $5$, so every number of the form $2^k(2m+1)$ with $m\ge 3$ is faithful. The potential exceptions are therefore the numbers of the form $2^k,2^k\cdot3$, and $2^k\cdot 5$ for $k\ge 0$.
Suppose that one of these numbers can be written in the form $a+ap+apq$ for some $a\ge 1$ and $p,q\ge 2$. Then it can be written as $a(1+p+pq)$, where $1+p+pq\ge 7$.
If $k\ge 4$, $2^k=2^{k-4}\cdot16=2^{k-4}(1+3+3\cdot4)$, so we may take $a=2^{k-4}$, $p=3$, and $q=4$. Thus, $2^k$ is faithful for $k\ge 4$, and we need consider only $2,4$, and $8$ individually; all are easily seen to be unfaithful.
Similarly, $2^k\cdot3=3\cdot2^{k-4}(1+3+3\cdot4)$ for $k\ge4$, so we may take $a=3\cdot2^{k-4}$, $p=3$, and $q=4$ and catch all numbers of the form $2^k\cdot3$ except $3,6,12$, and $24$ (and $3$ and $6$, being less than $7$, are certainly unfaithful); and $2^k\cdot5=5\cdot2^{k-4}(1+3+3\cdot4)$ for $k\ge4$, so we may take $a=5\cdot2^{k-4}$, $p=3$, and $q=4$ and catch all numbers of the form $2^k\cdot 5$ except $5,10,20$, and $40$.
We’ve now shown that all positive integers are faithful except $1,2,3,4,5,6,8$, and possibly $10,12,20,24$, and $40$. $10=1+3+6$ is faithful. It’s not hard to check that $12$ is not faithful: its only factor larger than $6$ is itself, so $a$ would have to be $1$, and $11=p(1+q)$ would have to be composite, which it is not. $20=2\cdot10=2+6+12$ is faithful, and similarly $40=4+12+24$ is faithful. That leaves only $24$ to be checked, and I’ll leave it to you.
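For anyone who wants to double-check the whole classification, here is a brute-force sketch of mine; it agrees that the unfaithful numbers below $100$ are exactly $1,2,3,4,5,6,8,12,24$:

    def faithful(n):
        # n = a(1 + p + p*q) with a >= 1 and p, q >= 2
        for a in range(1, n // 7 + 1):
            if n % a:
                continue
            rest = n // a - 1            # rest = p*(1 + q)
            for p in range(2, rest):
                if rest % p == 0 and rest // p >= 3:   # q = rest//p - 1 >= 2
                    return True
        return False

    print([n for n in range(1, 100) if not faithful(n)])
    # [1, 2, 3, 4, 5, 6, 8, 12, 24]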
H: What is the Sobolev Lemma?
In the paper I am reading the authors state that $|\nabla u|_\infty$ can be replaced by $|u|_3$ using the Sobolev Lemma. I am trying to find this lemma but its turned out to be very difficult.
The context is the following:
a smooth bounded domain $\Omega \subset \mathbb{R}^3$
$|\cdot|_s$ denotes the Sobolev norm of the space $W^{s,2}(\Omega)=H^s(\Omega)$ and $|\cdot|_\infty$ the norm in $L^{\infty}(\Omega)$
$u$ is a vector valued function (the velocity of a fluid)
This has to be one of the many imbedding theorems which should give
$$|\nabla u|_\infty \leq C \: |u|_3$$
where $C$ is a constant depending on $\Omega$ alone I suppose.
I'd appreciate if you can give me a reference as well. ThX!
AI: I'm assuming that your function $u$ belongs to $W^{3,2}(\Omega)$. We have the following result, which can be found in any good book on Sobolev spaces (for example in Leoni's book, or even in Brezis's book, though in the latter you have to iterate the estimates, which he establishes only for $W^{1,p}(\Omega)$).
If $\Omega$ is a bounded regular domain, $p\geq 1$, $k>\frac{n}{p}$ then $$W^{k,p}(\Omega)\hookrightarrow C^m(\overline{\Omega}),\ \forall\ 0\leq m<k-\frac{n}{p}$$
In your case $p=2$, $n=3$ and $k=3$, so any integer $0\le m<3/2$ is allowed; taking $m=1$ gives $\nabla u\in C^0(\overline{\Omega})$, hence $|\nabla u|_\infty\le C\,|u|_3$, which is your result.
H: Eigenvector Proof $(I+A)^{-1}$.
Show that the eigenvectors of the $n \times n$ matrix A are also eigenvectors of the matrix $$M = (I+A)^{-1} $$ Where I is the $n \times n$ unit matrix. Determine the eigenvalues.
My Work:
$$Mx=(I+A)^{-1}x = ???$$
AI: Hint: if you have
$$
Ax=\lambda x
$$
then what do you get when you add $x$ to both sides?
$$
(I+A)x = (1+\lambda)x
$$
Now do a bit of matrix algebra to get $(I+A)^{-1}$ on the right and $1+\lambda$ on the left.
(Alternative way to think of it - if $(I+A)^{-1}x = \Lambda x$, then how can you rearrange this to get $Ax$ on its own?) |
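A quick numerical illustration (my own sketch; note it needs $I+A$ to be invertible, i.e. no eigenvalue equal to $-1$, which holds for a generic random matrix):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))

    w, V = np.linalg.eig(A)
    M = np.linalg.inv(np.eye(4) + A)
    for lam, v in zip(w, V.T):
        print(np.allclose(M @ v, v / (1 + lam)))   # True for every eigenpair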
H: What is the meaning of 'columns have unit lengths'
What is the meaning of this?
In random projection, the original d-dimensional data is
projected to a k-dimensional (k << d) subspace through
the origin, using a random k × d matrix R whose columns
have unit lengths.
I have already searched around the internet and can't find a straight answer.
Does it mean that each column sum of values is equal to one?
AI: Columns have unit length, typically means, the following:
If $a_i$ is the $i^{th}$ column in a matrix, then $\Vert a_i \Vert_2 = 1$, where $\Vert \cdot \Vert_2$ is the two-norm of a vector.
So for instance, the matrix below has columns of unit length:
$$A = \begin{bmatrix}\dfrac12 & \dfrac1{\sqrt2}\\ -\dfrac12 & 0\\ -\dfrac1{\sqrt2} & \dfrac1{\sqrt2}\end{bmatrix}$$
since
$$\sqrt{\left(\dfrac12\right)^2 + \left(-\dfrac12\right)^2 + \left(-\dfrac1{\sqrt2}\right)^2} = \sqrt{\dfrac14 + \dfrac14 + \dfrac12} = 1$$
$$\sqrt{\left(\dfrac1{\sqrt2}\right)^2 + 0^2 + \left(\dfrac1{\sqrt2}\right)^2} = \sqrt{\dfrac12 + 0 + \dfrac12} = 1$$ |
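In the random-projection setting from the quote, such a matrix is typically produced by drawing random entries and then normalizing each column; a minimal numpy sketch of mine:

    import numpy as np

    k, d = 3, 10
    R = np.random.default_rng(0).standard_normal((k, d))
    R /= np.linalg.norm(R, axis=0)        # divide each column by its 2-norm

    print(np.linalg.norm(R, axis=0))      # all ones: every column has unit length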
H: Write ‘There is exactly 1 person…’ without the uniqueness quantifier
During a lecture today the prof. posed the question of how we could write "There is exactly one person whom everybody loves." without using the uniqueness quantifier.
The first part we wrote as a logical expression was "There is one person whom everybody loves.", ignoring the 'exactly one' part of the question initially. From this he wrote
$L(x,y): x$ loves $y$; domain for $x$ and $y$: $\{\text{people}\}$
$\exists x\forall y: L(y,x)$
Which I understand to mean 'There is a person $x$ such that for all $y$, $x$ is loved by $y$' AKA 'There is a person who is loved by everyone'. I get that part.
The part I don't get is how the expression of 'exactly one'.
$\forall z(\forall y(L(y,z))\to x =z)$
which then creates the joint expression
$\exists x\forall y(L(y,x))\land \forall z(\forall y(L(y,z))\to x=z)$
I just can't seem to understand how $\forall z(\forall y(L(y,z))\to x =z)$ means exactly one here. I suppose you can take $\forall z$ here to mean 'for any given person', which means the $\forall z$ is considering every person in the world. This would translate the second expression block to something like, "For any given person $z$, if everybody loves $z$ then $z$ is the same person as $x$".
To me though $\forall z$ generally means for every element in the domain which I see as meaning every person in the world simultaneously, as it seems to for $y$. Is that just plain wrong? How can I tell when $\forall$ means 'all' and when it means 'for any (one)'? In the previous English translation the only reason I was able to translate it (if it's even right) is because I already knew what the statement was suppose to mean.
Is it just that $\forall z$ means that this statement could be true for any element, and if so what's the difference between $\forall z$ and $\exists z$? Someone told me that $\exists z$ would be redundant because the expression says $x=z$ but how do I know that $x$ and $z$ are automatically the same person if $\exists$ is used for both?
Sorry if this is a bit long with too many questions. I just wanted to try to make the cause of my confusion as clear as possible so you can help me figure this out.
AI: Is it just that $\forall z$ means that this statement could be true for any element,
Yes, exactly.
...and if so what's the difference between $\forall z$ and $\exists z$?
If we only asserted the existence of some particular $z$ that everybody loves and that equals $x$, we would not be ruling out that there might be another person, different from $x$, who is also loved by everyone. And we have already asserted the existence of someone (namely, $x$) who is loved by all. So in that sense, "$\exists z$ such that..." would be redundant.

We need the universal quantifier for $z$ in the second clause to indicate that if there is any $z$ such that $\forall y\, L(y, z)$, then any/every such $z$ must be $x$, since there is exactly one person, namely $x$, who is loved by all $y$. I.e., for every $z$, if every $y$ loves $z$, then $z$ must be $x$; that is, $z$ is not anyone other than $x$. This gives us that $x$ (whose existence we asserted at the start) is therefore the one unique person loved by all.
Now, just one oversight to "clean up" your expression, which you state as:
$$∃x(\forall y L(y,x))\land ∀z(∀y(L(y,z))⟹x=z)$$
But here we have two independent clauses that creates a problem, because in your second clause, you have $x$ appear outside the scope of its quantifier. I.e., it is a free, unquantified varible.
So we want the scope of $\exists x$ to persist over the entire statement, hence the square brackets below.
That is, $$\exists x \big[\forall y(L(y, x))\land \forall z(\forall y(L(y, z)) \rightarrow z = x)\big]$$ |
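If it helps to see the formula in action, here is a brute-force check over a toy domain (a sketch in Python; the three-person domain and the sample `loves` relations are invented for illustration):

```python
PEOPLE = ["Ann", "Bob", "Cat"]  # toy domain

def exactly_one_loved_by_all(loves):
    """Evaluate Ex[ Ay L(y,x) and Az( Ay L(y,z) -> z = x ) ]."""
    return any(
        all(loves(y, x) for y in PEOPLE)            # everybody loves x
        and all((not all(loves(y, z) for y in PEOPLE)) or z == x
                for z in PEOPLE)                    # any such z is x itself
        for x in PEOPLE
    )

# Everybody loves Ann and nobody else is loved by all -> True
print(exactly_one_loved_by_all(lambda y, x: x == "Ann"))  # True
# Everybody loves everybody: more than one such person -> False
print(exactly_one_loved_by_all(lambda y, x: True))        # False
# Each person loves only themselves: nobody is loved by all -> False
print(exactly_one_loved_by_all(lambda y, x: y == x))      # False
```

Note how the second test fails precisely because the $\forall z$ clause rules out a second person loved by everyone.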
H: How to show this inequality?
Show that $$-2 \le \cos \theta(\sin \theta+\sqrt{\sin^2 \theta +3})\le2$$
Trial: I know that $-\dfrac 1 2 \le \cos \theta\cdot\sin \theta \le \dfrac 1 2$ and $\sqrt 3\le\sqrt{\sin^2 \theta +3}\le2$. The problem looks simple to me but I am stuck on solving it. Please help. Thanks in advance.
AI: $\left[\cos \theta\sin \theta+\cos\theta\sqrt{\sin^2 \theta +3}\right]^2\le \left(\cos^2\theta+\left(\sqrt{\sin^2\theta+3}\right)^2\right)\left(\cos^2\theta+\sin^2\theta\right)=4$ by the Cauchy-Schwarz inequality, which gives you the desired result.
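A quick numerical sanity check of this bound (a Python sketch; the grid size is an arbitrary choice):

```python
import math

# Sample the expression on a fine grid over one period.
vals = [
    math.cos(t) * (math.sin(t) + math.sqrt(math.sin(t) ** 2 + 3))
    for t in (2 * math.pi * k / 100_000 for k in range(100_000))
]
print(min(vals), max(vals))  # approximately -2 and 2, never outside [-2, 2]
```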
H: Why don't we consider the zero subspace (which is readily $T$-invariant) in the definition of direct sum of linear operators?
Why don't we consider the zero subspace (which is readily $T$-invariant) in the definition of direct sum of linear operators?
REF: Schaum's Outline of Linear Algebra
AI: Technically, you could.
A decomposition of an operator is much like a factorization of a natural number. We can consider factorizations of numbers with '1' as one of the factors, but what we are really interested in is non-trivial factorizations.
I would prefer your definition, allowing for zero subspaces, but we don't lose much either way. I suspect that is why (nonzero) is in parentheses - if they felt it was a hard rule, they wouldn't need to parenthesize that, would they?
Essentially, we are interested in whether an operator is "decomposable." Non-decomposable operators are, in some sense, primes. But every operator would be decomposable if we allowed for the zero subspace, so we would then need to define non-decomposable as "only has trivial decompositions."
H: Cauchy’s functional equation for non-negative arguments
Function $f:[0,+\infty)\rightarrow\mathbb{R}$ satisfies $f(x+y)=f(x)+f(y)$ for every non-negative $x$ and $y$. It's bounded from below by some non-positive constant $m$. Does it imply that $f$ has the form $f(x)=cx$, or is there another function satisfying these conditions?
AI: Yes.
It follows easily by induction that $f(nx)=nf(x)$ for all $n \in \mathbb{Z}^+$. Thus $f\left(\frac{m}{n}\right)=\frac{1}{n}f(m)=\frac{m}{n}f(1)$ for every non-negative rational $\frac{m}{n}$.
If $f(y)<0$ for some $y \in[0, +\infty)$, then $f(ny)=nf(y)<m$ for sufficiently large $n$, contradicting the lower bound. Thus $f(y) \geq 0$ for all $y \in [0, +\infty)$. Now if $a \geq b$ then $f(a)=f(b+(a-b))=f(b)+f(a-b) \geq f(b)$, so $f$ is non-decreasing. This is sufficient to imply that $f(x)=xf(1)$: squeeze $x$ between rationals $p\le x\le q$, so that $pf(1)=f(p)\le f(x)\le f(q)=qf(1)$, and let $p,q\to x$. (Note that $f(1) \geq 0$ here.)
H: Which topics of mathematics should I study?
I'm a first year econometrics student with a great interest in mathematics. I very much enjoy my studies, but I am still interested to learn about more topics in mathematics which are not part of my programme. Some of the main topics which I am already familiar with or which will soon be covered in my programme are: calculus, linear algebra, optimization, statistics/probability/combinatorics.
Which topics in mathematics would you advise me to study still (that are not part of this list)? I have a general interest in mathematics, so any advice on interesting and or essential topics in mathematics that are worth studying is appreciated.
If possible, could you also give me advice on books/references which I should study from?
Thanks in advance.
Edit: To be more specific, I am looking for topics on which the general consensus is that they are essential to know of for any mathematician or very interesting to study.
AI: You'll just need to get some universities' syllabi and pick some topics to study. The syllabi cover a lot of subjects, but they also give you an order in which the subjects should be studied, which will narrow your choices a little. You won't be lost in the diversity of fields of study because you'll have to cover some fundamentals first.
Cambridge and Oxford have nice materials for guiding your study. They'll also be useful in case you already know some of the first subjects, since you'll be able to pick more advanced material. The following resources are going to be very useful:
How to Become a Pure Mathematician
This website has book recommendations and also information on the order in which topics should be studied.
All the Mathematics You Missed: But Need to Know for Graduate School
This book comments on the importance of various fields of mathematics for graduate study.
Cambridge Syllabus
Oxford Syllabus
I've found both syllabi enlightening and informative. With them you're going to have something similar to the site I mentioned before: fields of study, the order in which they should be studied, some instruction on what you should be able to do after covering the topics, and a little about their importance.
There are also some all-in-one books and book collections that you should look:
Mathematics: Its Content, Methods and Meaning
Fundamentals of Mathematics
The World of Mathematics
Mathematics Form and Function
What Is Mathematics?
Princeton Companion to Mathematics
Finally, a personal suggestion: don't be afraid, just get the books and start reading; when things start to get dark you can use the torches of our fellow members to light your path! Good luck!
H: Comparing $\sqrt{1001}+\sqrt{999}\ , \ 2\sqrt{1000}$
Without the use of a calculator, how can we tell which of these is larger (higher in numerical value)?
$$\sqrt{1001}+\sqrt{999}\ , \ 2\sqrt{1000}$$
Using the calculator I can see that the first one is 63.2455453 and the second one is 63.2455532, but can we tell without touching our calculators?
AI: Since $\sqrt{a+1}-\sqrt a=\dfrac1{\sqrt{a+1}+\sqrt a}$ (multiply by the conjugate), we have
$$\frac{1}{\sqrt{1000}+\sqrt{1001}}<\frac{1}{\sqrt{1000}+\sqrt{999}}$$
$$\implies \sqrt{1001}-\sqrt{1000}<\sqrt{1000}-\sqrt{999}$$
$$\implies \sqrt{1001}+\sqrt{999}<2\sqrt{1000}$$ |
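For the skeptical: squaring both (positive) sides reduces the comparison to integers, which a few lines of Python confirm (a sketch; the 30-digit `Decimal` precision is an arbitrary choice):

```python
# (sqrt(1001) + sqrt(999))^2 = 2000 + 2*sqrt(999*1001), (2*sqrt(1000))^2 = 4000,
# so it suffices to compare 999*1001 with 1000^2.
print(999 * 1001 < 1000 ** 2)  # True, hence sqrt(1001)+sqrt(999) < 2*sqrt(1000)

from decimal import Decimal, getcontext
getcontext().prec = 30
lhs = Decimal(1001).sqrt() + Decimal(999).sqrt()
rhs = 2 * Decimal(1000).sqrt()
print(lhs < rhs)               # True
```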
H: example of a flat but not faithfully flat ring extension
I am learning commutative algebra, and there is a definition of faithfully flat modules or ring extensions. I can't think of an example of a flat but not faithfully flat ring extension or module. Can someone help me with it? Thanks.
AI: Take $f: A \rightarrow B$ to be a flat ring homomorphism such that the corresponding morphism of affine schemes $\operatorname{Spec}B \rightarrow \operatorname{Spec}A$ is not surjective. There is an easy way to do this: remember that localizing a ring $R$ at a multiplicative subset $S$ gives a flat ring homomorphism $R \rightarrow S^{-1}R$. However, this ring homomorphism is faithfully flat iff $S^{-1}R=R$. For instance, $\mathbb{Z} \rightarrow \mathbb{Q}$ is flat but not faithfully flat.
H: Minimal Polynomials Annihilating an Abelian Torsion-Free Group
Let $A$ be an abelian torsion-free group. Let $\theta \in\operatorname{Aut}A$. Assume that $\theta$ has a finite period in $\operatorname{Aut} A$, say $n$. Obviously $\theta^n-1$ annihilates $A$ (i.e. $A^{\theta^n-1}=\{0\}$, where $\theta^n-1$ is now the obvious endomorphism of $A$).
Thus, there is a polynomial of minimal degree with integral coefficients annihilating $A$, say $g(\theta)$. Can we assume that it is also a monic polynomial?
I would like to say that, for instance, $g(\theta)\mid \theta^n-1$.
AI: If you've studied injective envelopes or tensor products this can be handled in a smooth way. Let $E(A) = A \otimes \mathbb{Q}$ be the injective envelope of $A$. Then $\theta: A \to A$ can be regarded as a map $\theta:A \to E(A)$, and since $A \leq E(A)$ and $E(A)$ is injective, there is an extension $\tilde \theta : E(A) \to E(A)$. Since $E(A)$ is torsion-free, the extension turns out to be unique and still an automorphism. Since $E(A)$ is a vector space over $\mathbb{Q}$, you can use all your old minimal polynomial ideas. This works over commutative domains (and Ore domains), but be careful when you use Gauss's lemma if the domain is not a UFD. :-)
H: How can I find all the solutions of $\sin^5x+\cos^3x=1$
Find all the solutions of $$\sin^5x+\cos^3x=1$$
Trial: $x=0$ is a solution of this equation. How can I find other solutions (if any)? Please help.
AI: Hint: $ \sin^5 x\leq \sin^2 x$ and $ \cos ^3 x \leq \cos^2 x $.
Hint: the Pythagorean identity $\sin^2 x+\cos^2 x=1$ then forces equality in both inequalities above.
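A numerical scan illustrating where the solutions sit in $[0,2\pi)$ (a Python sketch; the grid size and tolerance are arbitrary choices):

```python
import math

N, tol, hits = 200_000, 1e-7, []
for k in range(N):
    x = 2 * math.pi * k / N
    if abs(math.sin(x) ** 5 + math.cos(x) ** 3 - 1) < tol:
        hits.append(round(x, 3))
# Clusters at 0, pi/2 and 2*pi (which is 0 again), matching the hints:
print(sorted(set(hits)))  # [0.0, 1.571, 6.283]
```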
H: Is a sequence of all the same numbers monotonic?
I'm wondering based on the definition of monotonicity:
A sequence where $a_n\geq a_{n+1}$ for all $n\in\mathbb{N}$ is monotonic.
So given that the sequence $a_n = 3$ consists of all the same number and is neither increasing nor decreasing, is it monotonic?
AI: Yes, a constant sequence (a number repeated indefinitely) is indeed monotonic: it is both monotonically non-decreasing and monotonically non-increasing.
For this reason, one can require that a sequence be strictly monotonically increasing or strictly monotonically decreasing. Under such a restriction, a constant sequence is neither strictly increasing nor strictly decreasing.
H: The difference between $\mathbb{Z}$ and $\mathbb{Z}^2$
I know that $\mathbb{Z}$ is the set of integers. But, what does $\mathbb{Z}^2$ mean? How is it different from $\mathbb{Z}$?
Thanks.
AI: If $A$ is a set then $A^2$ is a shorthand for $A\times A$, which is the set of ordered pairs with elements from $A$. That is: $$A^2=\{\langle a,b\rangle\mid a,b\in A\}.$$
So $\Bbb Z^2$ is the set of all ordered pairs whose elements are integers. |
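If a computational analogy helps, $\Bbb Z^2$ is exactly what `itertools.product` builds, shown here over a finite slice of $\Bbb Z$ (a Python sketch):

```python
from itertools import product

Z_slice = range(-1, 2)  # {-1, 0, 1}, a finite stand-in for Z
print(list(product(Z_slice, repeat=2)))
# [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 0), (0, 1), (1, -1), (1, 0), (1, 1)]
```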
H: Non-commutative rings without identity
I'm looking for examples (if there are such) of non-commutative rings without multiplicative identity which have the following properties:
1) finite with zero divisors
2) infinite with zero divisors
3) finite without zero divisors
4) infinite without zero divisors
I'll be grateful for any examples and hints. Thanks in advance.
AI: As indicated in Jared's comment, one may find easy examples of 1), 2), and 4) by taking non-unital subrings (such as left ideals or two-sided ideals) of rings with identity.
The most interesting request here is 3). In fact, any finite nonzero associative ring $R$ (possibly without identity) without zero divisors is a field.
First, let's prove that $R$ in fact has an identity. Let $a \in R$ be a nonzero element. The function $\phi \colon R \to R$ defined by $\phi(x) = ax$ is injective (there are no zero divisors), and since $R$ is finite, it's a bijection. Again, because $R$ is finite, this bijection must have finite order. Thus for some $d$, the function $\phi^d(x) = a^d x$ is the identity. So $e = a^d$ is a left identity; it is in fact two-sided, since $(xe-x)e = x(ee)-xe = xe-xe = 0$ for every $x$ (using $ee=e$), and because $R$ has no zero divisors and $e \neq 0$, this gives $xe = x$.
At this point, it's a standard exercise to show that a finite ring with identity and no zero divisors is a division ring. (Hint: think about the function above for any $a \in R$.) And Wedderburn's little theorem states that any finite division ring is a field. |
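A small computational illustration of the $\phi^d$ argument (a Python sketch; $\Bbb Z_7$ is used merely as a convenient finite ring without zero divisors):

```python
# In Z_7, left multiplication by any nonzero a is a bijection, and some
# power a^d acts as the identity map, exhibiting a^d as the identity element.
p = 7
for a in range(1, p):
    d = 1
    while any(pow(a, d, p) * x % p != x for x in range(p)):
        d += 1
    assert pow(a, d, p) == 1  # a^d is the multiplicative identity
    print(f"a = {a}: a^{d} = 1 (mod {p})")
```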
H: When does equality hold in $A\subseteq P(\cup A)$
Note: $P$ is the power set. It's easy to prove that this inclusion holds. But when is the other inclusion true? I can't even think of one example...
AI: Here is one example: $A=\{\varnothing\}$.
Also, if $A=V_{\alpha+1}$ for any ordinal $\alpha$, then $A=P(V_\alpha)$, therefore $\bigcup A=V_\alpha$, and equality holds.
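The first example can be checked mechanically (a Python sketch; `frozenset` stands in for sets of sets):

```python
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))}

A = {frozenset()}                # A = {emptyset}
union_A = frozenset().union(*A)  # the union of A's members, here empty
print(powerset(union_A) == A)    # True: A = P(union A)
```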
H: Fourier Transforms of shifted sinc funtions
I would like to calculate the Fourier transform of the following functions:
$$\left(\dfrac{\sin(\pi x\pm\pi n/2)}{\pi x\pm\pi n/2}\right)^2$$
$$\dfrac{\sin(\pi x+\pi n/2)}{\pi x+\pi n/2}\cdot\frac{\sin(\pi x-\pi n/2)}{\pi x-\pi n/2}$$
with $n \in\mathbb{N}$
Any help will be highly appreciated!
AI: Hint 1: the transform of $\sin (x\pi)/(x\pi)$ is a rectangle with height 1 between $-\pi$ and $\pi$.
Hint 2: multiplication in one domain corresponds to convolution in the other domain.
Hint 3: shifting a function by $x_0$ corresponds to multiplication of its Fourier transform with $e^{\pm ix_0\omega}$.
EDIT: For solving the second case (the product of the two shifted sinc functions) we have the Fourier transforms of the two factors:
$$e^{\pm i\omega n/2}\text{rect}(\omega,\pi)$$
So the convolution becomes
$$\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{i\Omega n/2}\text{rect}(\Omega,\pi)
e^{-i(\omega-\Omega) n/2}\text{rect}(\omega-\Omega,\pi)d\Omega=\\
=\frac{e^{-i\omega n/2}}{2\pi}\text{rect}(\omega,2\pi)\int_{\max (-\pi,\omega-\pi)}^{\min (\pi,\omega+\pi)}e^{i\Omega n}d\Omega$$
For $-2\pi<\omega < 0$ we get
$$\frac{e^{-i\omega n/2}}{2\pi}\int_{-\pi}^{\pi+\omega}e^{i\Omega n}d\Omega$$
and for $0<\omega<2\pi$ we have
$$\frac{e^{-i\omega n/2}}{2\pi}\int_{\omega-\pi}^{\pi}e^{i\Omega n}d\Omega$$
I trust you can take it from here.
EDIT 2:
OK, some more help and the final solution. The first integral (for $-2\pi<\omega < 0$ ) evaluates to
$$\frac{e^{-i\omega n/2}}{2\pi}\frac{(-1)^n}{in}\left (e^{in\omega} -1\right )=
\frac{(-1)^n}{n\pi}\sin\frac{n\omega}{2}$$
and the second integral for positive $\omega$ is the same, just with the opposite sign. So the final result is
$$\frac{(-1)^{n+1}}{n\pi}\text{sign}(\omega)\sin\frac{n\omega}{2},\quad
|\omega|<2\pi,\quad \text{(zero otherwise)}$$ |
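As a numerical cross-check of this closed form (a Python sketch with NumPy; the convention $F(\omega)=\int f(x)e^{-i\omega x}\,dx$, the truncation window and the grid are choices of this sketch, and truncation limits the accuracy to roughly $10^{-3}$):

```python
import numpy as np

n = 3                                          # any positive integer
x = np.linspace(-400, 400, 800_001)            # truncated real line
f = np.sinc(x + n / 2) * np.sinc(x - n / 2)    # np.sinc(t) = sin(pi t)/(pi t)

def closed_form(w):
    return (-1) ** (n + 1) / (n * np.pi) * np.sign(w) * np.sin(n * w / 2)

for w in (0.7, -1.3, 2.5):                     # points inside |w| < 2*pi
    numeric = np.trapz(f * np.exp(-1j * w * x), x)
    print(w, numeric.real, closed_form(w))     # agree to about 1e-3
```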