H: complex analysis - bounded but not holomorphic function
What are some examples of functions that are bounded but not holomorphic?
I know that some functions that are not holomorphic are
the absolute value function and piecewise functions (not analytic where the pieces meet).
AI: Well, you can take any bounded discontinuous function. For example,
$f(z) = 1$ if $z = 0$ and $f(z) = 0$ otherwise is bounded and discontinuous and hence not holomorphic. |
H: Number of combinations
You are given $K$ prime numbers, bigger than 6; find the number of different numbers that can be made from those prime numbers (using 1 number, 2 numbers, ..., $K$ numbers).
Obviously you need to get the number of combinations that you'll get using only 1 number, 2 numbers and so on, and at the end add them all. It's easy, but when one or more numbers appear more than once I encounter a problem.
Here's example:
You are given:
2 2 3 5 7 11 13
All the combinations of two numbers are: (2,2), (2,3), (2,5), (2,7), (2,11), (2,13), (3,5), (3,7), (3,11), (3,13), (5,7), (5,11), (5,13), (7,11), (7,13), (11,13).
There are 16 combinations, but using the formula for combinations I get 21, because it counts (2,3) using the first 2 and (2,3) using the second 2 as different combinations.
I see a pattern there: I need to find the number of combinations and then subtract the number of combinations of the repeated number with the primes different from it. In this case there are 5 such combinations and 21-5=16, which gives the right result.
But what about when the number of repeated prime numbers is bigger? For example:
2, 2, 2, 3, 3, 5, 7, 7, 11, 13
AI: Your problem is that you are trying to count the numbers that can be made with 1 prime, 2 primes, 3 primes, etc. separately. The computation becomes simple when you stop making this distinction.
In your example $2,2,2,3,3,5,7,7,11,13$, the number can have $0,1,2,$ or $3$ twos, $0,1,$ or $2$ threes, etc., so the total number of possible products is
$$
4 \cdot 3 \cdot 2 \cdot 3 \cdot 2 \cdot 2 - 1 = 287
$$
The $-1$ is to subtract the case of multiplying no primes together, i.e. the $1$ case. Each possibility is unique by the fundamental theorem of arithmetic. And as long as you don't actually need the specific values for 2 primes, 3 primes, etc. this method is sufficient. |
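A quick brute-force check of this count (a sketch in Python; the prime multiset is the one from the question):

```python
# Brute-force all products of 1, 2, ..., k of the given primes and count the
# distinct values; duplicates collapse because we collect them in a set.
from itertools import combinations

primes = [2, 2, 2, 3, 3, 5, 7, 7, 11, 13]

products = set()
for r in range(1, len(primes) + 1):
    for combo in combinations(primes, r):
        prod = 1
        for q in combo:
            prod *= q
        products.add(prod)

print(len(products))   # 287, matching 4*3*2*3*2*2 - 1
```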
H: Simple probability with marbles
I have completely forgotten my statistics knowledge, and I have now found myself in dire need of this without having access to my books.
While I'm pursuing another context, I would like to ask my questions using marbles. Say I've got $100$ marbles of two colors: $6$ black marbles and $94$ white marbles. Then I pick $4$ marbles at random without putting them back. How do I then calculate the probability that $1$ marble is black, $2$ marbles are black, and $3, 4...$?
I feel there should exist some kind of a formula for calculating this probability. I want to be able to plot these probabilities via this formula.
AI: Hints:
The number of ways of picking 4 marbles out of 100 (order unimportant) is ${100 \choose 4}$. This accounts for the "without replacement" specification since the $4$ marbles are distinct.
The number of ways of picking, for example, 2 white marbles and 2 black marbles out of the specified 100 marbles is ${6 \choose 2}{94 \choose 2}$. |
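A short numerical sketch of these hints (the counts are the ones from the question):

```python
# Hypergeometric probabilities: P(exactly k black in 4 draws without
# replacement) = C(6, k) * C(94, 4 - k) / C(100, 4).
from math import comb

black, white, draws = 6, 94, 4
total = comb(black + white, draws)

for k in range(draws + 1):
    p = comb(black, k) * comb(white, draws - k) / total
    print(f"P({k} black) = {p:.6f}")
```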
H: Does $x^T(M+M^T)x \geq 0 \implies x^TMx \geq 0$ hold in only one direction?
I know this is true for the "if" part, but what about the "only if"?
Can you give me one example when the "only if" part does not hold? I am not quite sure about this.
I forgot to tell you that $M$ is real and $x$ arbitrary.
AI: $M=\frac{M+M^T}2+\frac{M-M^T}2$ and $x^T\left(\frac{M-M^T}2\right)x=0$ for every real vector $x$ (because $x^TMx=(x^TMx)^T=x^TM^Tx$). Therefore $x^TMx\ge0$ if and only if $x^T\left(\frac{M+M^T}2\right)x\ge0$. |
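A quick numerical sanity check of the identity used here (a sketch with random data; the matrix size is arbitrary):

```python
# The antisymmetric part (M - M^T)/2 contributes nothing to the quadratic
# form, so x^T M x equals x^T ((M + M^T)/2) x for real x.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
x = rng.standard_normal(5)

sym = (M + M.T) / 2
print(np.isclose(x @ M @ x, x @ sym @ x))   # True
```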
H: Proof of the discrete Doob inequality
$\mathbf{Theorem:}$ Let $n \in \mathbb{N}$, $a,b \in \mathbb{R}$, $a < b$ and $S_i$ a submartingale for $i=1,2,\dots,n$. Define $$H_n(a,b;S_1,\dots,S_n) = \text{card} \Bigg( \{ (i,j) \in \mathbb{N}^2: 1 \le i <j \le n, S_i \le a, S_j \ge b, a<S_k<b, \forall k = i+1, i+2,\dots, j-1 \} \Bigg)$$ a function that counts the number of "upward crossings" of the interval $(a,b)$ (i.e. $H_n$ = how many times the $S_i$ fell below $a$ and then rose above $b$). Then
$$\mathbb{E}[H_n(a,b;S_1,\dots,S_n)] \le \frac{\mathbb{E}(S_n-a)^{+}-\mathbb{E}(S_1-a)^{+}}{b-a} \le \frac{\mathbb{E}(S_n-a)^{+}}{b-a} $$
$\mathbf{Proof:}$ Denote $Z_i = (S_i-a)^{+}$, this way our $H_n(a,b; S_1,\dots,S_n)=H_n(0,b-a; Z_1,\dots,Z_n)$ and define the stopping times $$\Lambda_k=\min\{\min\{i\in \mathbb{N}: i\ge \Psi_{k-1}, Z_i=0\},n\}$$
$$\Psi_k = \min\{ \min \{i\in \mathbb{N}: i \ge \Lambda_{k}, Z_i \ge b-a \},n\}\\ \Psi_0 \equiv 1$$
Now we can proceed proving the Doob inequality:
\begin{align}
Z_n-Z_1 &= Z_{\Lambda_{ \lfloor \frac{n}{2} \rfloor+1} } - Z_{\Psi_0}\\
&= \sum_{k=1}^{\lfloor \frac{n}{2} \rfloor +1} (Z_{\Lambda_k}-Z_{\Psi_{k-1}} )+ \sum_{k=1}^{\lfloor \frac{n}{2} \rfloor} (Z_{\Psi_k}-Z_{\Lambda_{k}} ) \\
& \ge \sum_{k=1}^{\lfloor \frac{n}{2} \rfloor} (Z_{\Lambda_{k+1}}-Z_{\Psi_{k}} ) + (b-a)H_n(0,b-a; Z_1,\dots,Z_n)
\end{align}
Taking the expectation of $Z_n - Z_1$ proves the inequality. $\square$
$\mathbf{Q}$: What I don't understand is the magic around the stopping times. Sure, $\Lambda_k$ denotes the first time after $\Psi_{k-1}$ (i.e. after $Z$ rose above $b-a$) that $Z$ returns back to the origin. Conversely, $\Psi_k$ denotes the first time after $Z_{\Lambda_k}=0$ that $Z$ exceeds $b-a$. So this is how we play around with the notion of "interval crossing."
What I don't get, though, is why $Z_n = Z_{\Lambda_{ \lfloor \frac{n}{2} \rfloor+1}}$?? The subscript. Why the integer part $\lfloor \frac{n}{2}\rfloor+1$? Is this because we've got one $k$ that's taking care of the "upward" crossing and another $k$ for the "downward" crossing (so in the end for $k$ crossings, a half would be "bottom $\to$ top" and the other "top $\to$ bottom")?
AI: The equality
$Z_n = Z_{\Lambda_{\lfloor\frac n2\rfloor + 1}}$
is simply a consequence of the fact that
$\Lambda_{\lfloor\frac n2\rfloor + 1} = n$ .
Because there has to be a downcrossing between every two upcrossings, we must have $\Lambda_{k+1}\geq\Lambda_k+2$, therefore $\Lambda_{\lfloor\frac n2\rfloor + 1} \geq n$, and equality must hold because $\Lambda_k$ is defined as the minimum of $n$ and the time of the $k$th return to $0$. |
H: easy homework: Equivalence classes, how do they look?
Let's say that I got a
set = { Arnold, Harrison }
and I want to display the equivalence class of [ Harrison ]
The actual condition for the relation doesn't matter in this case so let's just say that {Arnold} is the only relation to Harrison.
This would be displayed as:
[ Harrison ] = { Arnold }
However, if we have this set instead:
set = { Arnold, Harrison, {Arnold, Harrison}, { Arnold, Arnold, Harrison} }
And we say that this time, Harrison has a relation to Arnold but also {Arnold, Harrison} and even {Arnold, Arnold, Harrison}.
How is this expressed as an equivalence class?
[ Harrison ] = {Arnold, Harrison}
or...
[ Harrison ] = { Arnold, {Arnold, Harrison}, {Arnold, Arnold, Harrison} }
or... what?!
AI: Your example (or "the homework text") doesn't seem too enlightening because all the elements in the set seem to be equivalent to each other.
Remember that the equivalence class of an element $s$ in a set $S$ with an equivalence relation $\sim$ is simply the subset of $S$ whose element are the elements of $S$ equivalent to $s$:
$$
[s]=\{x\in S\,\mid\,x\sim s\}.
$$
So, for instance, if $S=\{a,b,c,d,e\}$ and
$a$ is equivalent only to $c$ and itself,
$b$ is equivalent only to $e$ and itself,
$d$ is equivalent only to itself,
then the equivalence classes are $[a]=[c]=\{a,c\}$, $[b]=[e]=\{b,e\}$ and $[d]=\{d\}$.
Furthermore, the quotient set is $S/\sim=\{[a],[b],[d]\}$. |
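A small computational sketch of this example (the relation below is the one described above, written out as a set of ordered pairs):

```python
# Compute the equivalence classes and the quotient set S/~ for the relation
# a~c, b~e, d~d (together with reflexivity and symmetry).
S = {"a", "b", "c", "d", "e"}
related = {("a", "a"), ("c", "c"), ("a", "c"), ("c", "a"),
           ("b", "b"), ("e", "e"), ("b", "e"), ("e", "b"),
           ("d", "d")}

def eq_class(s):
    # [s] = {x in S : x ~ s}
    return frozenset(x for x in S if (x, s) in related)

quotient = {eq_class(s) for s in S}
print([sorted(c) for c in quotient])   # [['a', 'c'], ['b', 'e'], ['d']] in some order
```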
H: Riemann or Lebesgue integrable
Lets consider the following two functions:
$$f(x)=\begin{cases} x &,x\in[0,1]\setminus\Bbb Q \\ 0 &,x\in[0,1]\cap\Bbb Q\end{cases}$$
$$g(x)=\frac{(-1)^{[x]}}{[x]}$$
where $[x]$ is the integer part of $x$. I am trying to determine which of these integrals are Riemann or/and Lebesgue integrable; $f(x)$ over $[0,1]$ and $g(x)$ over $[1,\infty)$.
Lebesgue for $f(x)$:
$\int_{[0,1]} f dx=\int_{[0,1]\setminus\Bbb Q}x\ dx + \underbrace{\int_{[0,1]\cap\Bbb Q}0\ dx}_{=0}=\int_{[0,1]\setminus\Bbb Q}x\ dx=\int_{[0,1]}x\ dx-\underbrace{\int_{\Bbb Q}x\ dx}_{=0}=\frac12$
Riemann for $f(x)$: I assume this function is not Riemann integrable, because every interval in a partition of $[0,1]$ will contain both rationals and irrationals, but I am not sure how to conclude.
Riemann for $g(x)$: The function takes the values $-1$ on $[1,2)$, $\frac12$ on $[2,3)$, $-\frac13$ on $[3,4)$, etc. So the integral is $-1+\frac12-\frac13+\frac14-\ldots=\sum_{k=1}^\infty(-1)^k\frac1k=-\log(2)$.
But I have no idea for the Lebesgue integral of $g(x)$.
AI: $g$ is not Lebesgue integrable, because the Lebesgue integral of a function $g$ is defined as $$
\int_\Omega g \,d\mu := \int_\Omega g_+ \,d\mu - \int_\Omega g_- \,d\mu
$$
where $g_+$ is the positive part of $g$, i.e. $g_+(x) = g(x)$ if $g(x) \geq 0$ and $g_+(x)=0$ otherwise, and $g_-$ similarly is the negative part of $g$. $g$ is Lebesgue integrable only if at most one of the integrals on the RHS diverges. In your case, both $\int_\Omega g_+ \,d\mu$ and $\int_\Omega g_- \,d\mu$ are $+\infty$, since each is a sum of terms comparable to the harmonic series.
$f$ is not Riemann integrable because it is discontinuous at every point except $0$, so its set of discontinuities does not have measure zero (Lebesgue's criterion for Riemann integrability). |
H: How can partial derivatives feature in the definition of a function?
I have a map $f(t,g,h)$ where $f:[0,1]\times C^1 \times C^1 \to \mathbb{R}.$
I want to define $$F(t,g,h) = \frac{d}{dt}f(t,g,h)$$
where $g$ and $h$ have no $t$-dependence in them. So $g(x) = t^2x$ would not be admissible if you want to calculate what $F$ is. How do I write this properly? Is it correct to write instead
Define $F$ by $F(t,\cdot,\cdot) = \frac{d}{dt}f(t,\cdot,\cdot)$.
But there is some ambiguity in the arguments. What is the best way to write it?
AI: This is called a partial derivative:
$$
F(t,g,h) = \frac{\partial}{\partial t} f(t,g,h)
$$
The definition of a partial derivative is that you differentiate with respect to one variable as usual while keeping all others fixed. The different kind of $d$ is there to remind you of that, because in this case writing $\frac{d}{dt}f(t,g,h)$ would imply that $g$ and $h$ have some kind of $t$-dependence. |
H: Why does the (topology given by) Hausdorff metric depend only on the topology?
If I have a compact metric space $(X,d)$, I can define the Hausdorff metric on the set $K(X)$ of all non-empty compact (equivalently, closed) subsets of $X$ as $$d_H(A,B) = \max ( \sup_{x \in A} \inf_{y \in B} d(x,y), \sup_{y \in B} \inf_{x \in A} d(x,y))$$
Now, I'm told, "the topology on $K(X)$ depends only on the topology of $X$, as any two metrics on $X$ are equivalent". Should I interpret the any two metrics are equivalent as "they generate the same topology"? Then, the implication of the Hausdorff metrics being equivalent is not clear at all.
On the other hand, if I interpret two metrics being equivalent as $c d_1<d_2 < Cd_1$, then the Hausdorff metrics producing the same topology seems clear, but the fact that any two metrics on $X$ are equivalent is suspicious. Does the compactness of $X$ force this?
Even if I know one of the interpretation to be correct, I can then try and prove the statement. In fact, I would prefer that, over a complete proof of the statement.
As an aside, is there a standard notation for my $K(X)$? The book I'm reading uses $2^X$, although I suspect due to typographical limitations.
AI: Your first interpretation is the correct one.
The topology on $K(X)$ induced by the Hausdorff metric has a description as a hit-and-miss topology. That is, for a nonempty open set $U \subseteq X$ put
$$
U^+ = \{C \in K(X) \mid C \cap U \neq \emptyset\}
$$
and
$$
U^- = \{C \in K(X) \mid C \subseteq U\}.
$$
The set $U^+$ contains the compact sets that meet $U$ and the set $U^-$ contains the compact sets that miss $X \setminus U$.
The sets $U^+$ and $U^-$ where $U$ runs through the non-empty open sets of $X$ form a subbasis for the so-called Vietoris topology on $K(X)$. The sets $[U; V_1,\dots,V_n] := U^- \cap V_{1}^+ \cap \cdots \cap V_{n}^+$ form a convenient basis for the Vietoris topology.
The point is that one can prove that the Hausdorff metric induces the Vietoris topology on $K(X)$.
It may be helpful to prove at some point that for a countable dense subset $D$ of $X$ the collection of non-empty finite subsets of $D$ forms a countable dense subset of $K(X)$.
A nice and detailed exposition of these ideas can be found e.g. in Srivastava, A course on Borel sets, section 2.6, see Spaces of Compact sets, pages 66ff.
Added: The second interpretation is too strong. It is not very hard to show that a fat Cantor set and the usual ternary Cantor set are homeomorphic, but not bi-Lipschitz homeomorphic. |
H: Properties of determinants.
Is this property of a determinant true?
$$|A^3| = |A|^3.$$
I haven't studied this yet, but while working on a problem I wondered if this could be true; I'll try it on other problems too if it works.
AI: Hint: use $\det(AB)=\det(A)\det(B)$. |
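A quick numerical check of the claim (a sketch with a random matrix):

```python
# det(A^3) = det(A)^3 follows from det(AB) = det(A) det(B).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

lhs = np.linalg.det(np.linalg.matrix_power(A, 3))
rhs = np.linalg.det(A) ** 3
print(np.isclose(lhs, rhs))   # True
```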
H: Extending a function beyond the completion/closure of its domain
In analysis there are certain theorems that tell under which conditions you can continuously extend a continuous functions to the closure/completion of its domain (which actually give the same set, since when talking about completions we have to be (at least) in a metric space).
The theorems I have in mind are these three:
Every uniformly continuous mapping from a metric space to a complete metric space can be extended, uniformly continuously, to a mapping from the completion of the domain to the range.
Every continuous function on a dense subset can be continuously extended to the closure of that subspace (which is, by definition of "dense", the whole space).
The theorem from here.
What I'm looking for is a collection of theorems that tell me under which circumstances one can extend a continuous function beyond the completion or the closure of its domain.
The only theorem of this kind that I know of is the famous Tietze-Urysohn extension theorem (which applies to function whose domain is already a closed set).
I'd also be happy with a reference, as long as the theorems listed there are quickly accessible (i.e. don't require reading through a thicket of abstract/very specialized definitions before getting to the theorems - the more "concrete" the theorems are (as in $\mathbb{R}^n$, or a metric space), the better).
AI: Extensions problems come in two basic shapes:
unique extension from a non-closed set to its closure
non-unique extension from a closed set to a larger set
So it makes sense that many extension theorems fall into one or the other class. Sometimes you just have to use two of them.
But extension theorems for Lipschitz functions, from the simple McShane-Whitney theorem to the more sophisticated Kirszbraun's theorem and its generalizations, do not care whether the domain is closed or not.
The recent two-volume book Methods of geometric analysis in extension and trace problems by Brudnyi and Brudnyi looks like the reference you want. It has separate chapters for results for $\mathbb R^n$ and for general metric spaces. |
H: Does $X\times Y$ have countable chain condition?
Let $X$ and $Y$ have countable chain condition. Does $X\times Y$ have countable chain condition?
Thanks for your help.
AI: Consistently not necessarily. A consistent counterexample, under, for example, the assumption $\mathbf{V} = \mathbf{L}$ or just $\diamondsuit$, would be a Souslin line, which has the ccc, but whose square does not.
However, also consistently yes. In particular, Martin's Axiom, $\sf{MA}$ (or even the fragment $\sf{MA} (\aleph_1)$), implies that any product of ccc spaces is ccc. |
H: How to evaluate the following limit?
How to evaluate this limit?
$$\underset{n\to \infty }{\mathop{\lim }}\,\left( {{n}^{-2}}\sum\limits_{i=1}^{n}{\sum\limits_{j=1}^{{{n}^{2}}}{\frac{1}{\sqrt{{{n}^{2}}+ni+j}}}} \right)$$
Thanks!
AI: The idea here is to rearrange in terms of a Riemann sum that leads to a double integral. Here, a little manipulation produces
$$\frac{1}{n} \sum_{i=1}^n \frac{1}{n^2} \sum_{j=1}^{n^2} \frac{1}{\sqrt{1+\frac{i}{n}+\frac{j}{n^2}}}$$
Then as $n \to \infty$, the double sum becomes
$$\begin{align}\int_0^1 dx \, \int_0^1 dy \frac{1}{\sqrt{1+x+y}} &= 2 \int_0^1 dx \left [ \sqrt{2+x}-\sqrt{1+x}\right ]\\ &= \frac{4}{3} \left [\left (3^{3/2}-2^{3/2} \right ) - \left (2^{3/2}-1 \right )\right ]\\&=4 \sqrt{3} - \frac{16}{3} \sqrt{2}+\frac{4}{3}\end{align}$$ |
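A numerical sanity check of this value (a sketch; $n$ is a moderate sample size, so expect agreement only to a couple of digits):

```python
# Compare the double sum for a fixed n with the value of the double integral.
from math import sqrt

n = 100
s = sum(1.0 / sqrt(n**2 + n*i + j)
        for i in range(1, n + 1)
        for j in range(1, n**2 + 1)) / n**2

exact = 4 * sqrt(3) - 16 * sqrt(2) / 3 + 4 / 3
print(s, exact)   # both are approximately 0.719
```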
H: Show $x+\frac{\lambda}x \geq 2\sqrt{\lambda}$ all $x,\lambda>0$
For $\lambda>0 $ and $x > 0$,
$$x+\frac{\lambda}x \geq 2\sqrt{\lambda}$$
I tried letting $g(x)$ be the difference of the two sides and then solving $g'(x) = 0$. From that $x$, I can find the minimum of $g(x)$, check that it is nonnegative, and finally rearrange.
That doesn't seem to work, though.
AI: Bringing $2\sqrt{\lambda}$ to the left we get
$$x-2\sqrt{\lambda}+\frac{\lambda}{x}=(\sqrt{x}-\sqrt{\frac{\lambda}{x}})^2\ge 0$$
So the claim is true. |
H: Tails of Fourier Transformed family of functions
I am reading a thesis where on page 39, Definition 4, $\epsilon$-oscillatory is defined as a property for a family of functions $\{f_{\epsilon}\}_{0<\epsilon<1}$ in $L^2(\mathbb{R}^d)$ to have if $$\lim_{R\to\infty}\limsup_{\epsilon\to 0}\int_{\{\left|\xi\right|>R\}}\frac{1}{\epsilon^d}\left|\widehat{f_{\epsilon}}\left(\frac{\xi}{\epsilon}\right)\right|^2d\xi=0.$$
It is mentioned soon after that for this property to hold it suffices that the weak derivative $\{\epsilon\nabla f_{\epsilon}\}$ is bounded in $L^2$. I cannot connect the two. The most I could do was the following
$$\sup_{\epsilon}||\epsilon\nabla f_{\epsilon}||_{L^2}<C \implies \sup_{\epsilon}||\epsilon\widehat{\nabla f_{\epsilon}}||_{L^2}<C\implies\sup_{\epsilon}||\epsilon\xi \widehat{f_{\epsilon}}||_{L^2}<C$$
$$\implies \sup_{\epsilon}\int_{\mathbb{R}^d}\frac{1}{\epsilon^d}\left|p\widehat{f_{\epsilon}}\left(\frac{p}{\epsilon}\right)\right|^2dp<C$$ after a change of variables $\epsilon\xi=p$. Could someone please help me in figuring out the rest?
AI: In the first displayed equation, the substitution $t=\frac{\xi}\varepsilon$ reduces us to show that
$$\lim_{R\to\infty}\limsup_{\varepsilon\to 0}\int_{\{|t|>\frac R\varepsilon\}}|\widehat{f_\varepsilon}(t)|^2dt=0.$$
Rewriting
$$\int_{\{|t|>\frac R\varepsilon\}}|\widehat{f_\varepsilon}(t)|^2dt=\int_{\{|t|>\frac R\varepsilon\}}|t|^2|\widehat{f_\varepsilon}(t)|^2\frac 1{|t|^2}dt,$$
and noticing that over the set of integration, $\frac 1{|t|^2}\leqslant\frac{\varepsilon^2}{R^2}$, we get
$$\int_{\{|t|>\frac R\varepsilon\}}|\widehat{f_\varepsilon}(t)|^2dt\leqslant \frac{\varepsilon^2}{R^2}\int_{\Bbb R^d}|t|^2|\widehat{f_\varepsilon}(t)|^2dt\leqslant\frac 1{R^2}\sup_{\varepsilon}\lVert\varepsilon \xi\widehat f_{\varepsilon}\rVert^2.$$ |
H: Uniqueness of solution of a PDE
Consider the PDE $$\dfrac{\partial^2 u}{\partial t^2} - \dfrac{\partial^2 u}{\partial x^2}=f(x)$$
The question is:
Find the boundary conditions such that this PDE admits a unique solution in $[a,b] \times [0,T].$
For this, I suppose the existence of two solutions $u_1$ and $u_2$ and we put $v = u_1 - u_2.$ Then, $$\dfrac{\partial^2 v}{\partial t^2} - \dfrac{\partial^2 v}{\partial x^2} = 0$$
We multiply this last equation by $\dfrac{\partial v}{\partial t}$, integrate over $[a,b]$, and obtain $$\displaystyle\int_a^b \dfrac{\partial^2 v}{\partial t^2} \cdot \dfrac{\partial v}{\partial t}\, dx - \int_a^b\dfrac{\partial^2 v}{\partial x^2}\cdot \dfrac{\partial v}{\partial t}\, dx = 0$$
I tried integration by parts, but didn't find the boundary conditions that give uniqueness for this PDE. Thanks for any help.
AI: For the wave equation there is a conserved energy
$$
E(t)=\int_{-\infty}^\infty (u_t)^2+(u_x)^2 \,dx.
$$
This is obtained multiplying by $u_t$ and integrating by parts (as you did). For $v$ this quantity is zero initially, thus it vanishes for all times. Right? |
H: How to solve $a\cdot7 \equiv1\pmod8$?
How should I solve the following congruence?
$a\cdot7 \equiv1\pmod8$
Is the value of $a$ equal to $1$?
Thanks.
AI: For every natural number $n$ we have $(-1)*(n-1) \equiv (-1)^2 = 1 \bmod n$. So in your example $a \equiv -1 \equiv 7 \bmod 8$. |
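A tiny check of this (a sketch; `pow(base, -1, mod)` needs Python 3.8+):

```python
# Find a with a*7 ≡ 1 (mod 8) by brute force and via the built-in modular inverse.
a = next(x for x in range(8) if (x * 7) % 8 == 1)
print(a)               # 7
print(pow(7, -1, 8))   # 7
```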
H: Is $\gcd(a,bc)=\gcd(a,b)\gcd({a\over\gcd(a,b)}, c)$?
Is it true that $\gcd(a,bc)=\gcd(a,b)\gcd({a\over\gcd(a,b)}, c)$?
It is true in quite a few examples that I came up with, e.g.
$a = 18, b = 21, c = 33$
$\gcd(18,21)\gcd({18\over\gcd(18,21)}, 33) = 3 \gcd({18 \over 3}, 33) = 3 \times 3 = 9$
$\gcd(18,693) = 9$
Here is my (probably not very good) attempt at a proof:
For a $b$ where $\gcd(a, b) = 1$
For a $c$ where $\gcd(a, c) = 1$
$\gcd(k_1a, k_1b) = k_1$
$\gcd(k_2a, k_2c) = k_2$
$\gcd(k_1k_2a, k_1k_2bc) = k_1k_2$
Hence, $\gcd(k_1k_2a, k_1b) \gcd({k_1k_2a \over \gcd(k_1k_2a, k_1b)}, k_2c) = k_1 \gcd({k_1k_2a \over k_1}, k_2c) = k_1k_2 = \gcd(k_1k_2a, k_1k_2bc)$
Is this statement true, or did I just stumble across several lucky coincidences?
AI: It is generally true (for positive arguments) that $\gcd(dx,dy)=d\gcd(x,y)$. Now $d=\gcd(a,b)$ divides both $a$ and $b$, so you get from this that
$$
\gcd(a,bc)=d\gcd(a/d,(b/d)c).
$$
Moreover $a/d$ and $b/d$ are relatively prime (a common divisor, multiplied by $d$, would divide both $a$ and $b$, and this can only happen for $1$). So what you are asking boils down to: if $a'$ and $b'$ are relatively prime, does it hold for all $c$ that $\gcd(a',b'c)=\gcd(a',c)$? The answer is yes, this is a well known fact attributed to Gauss. |
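A quick empirical check of the identity over random inputs (a sketch, not a proof; the range and number of trials are arbitrary):

```python
# Test gcd(a, bc) == gcd(a, b) * gcd(a / gcd(a, b), c) on random triples.
from math import gcd
from random import randint

def identity_holds(a, b, c):
    d = gcd(a, b)
    return gcd(a, b * c) == d * gcd(a // d, c)

print(all(identity_holds(randint(1, 10**6), randint(1, 10**6), randint(1, 10**6))
          for _ in range(10_000)))   # True
```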
H: Golden ratio rectangles
I'm designing a layout and I would like to use four golden ratio rectangles. The total width of the layout is 960px. How do I find the height (x)? Below is a diagram of the layout.
AI: Finding the required height amounts to expressing $a$ and $c$ in terms of the height $h = a+b$. I will do this below; denote with $\phi$ the golden ratio.
We have that $\dfrac h a =\phi$, i.e. $a = \dfrac h \phi$. Now $d = \dfrac h2$, and $\dfrac c d = \phi$.
Using some trivial algebra, we obtain:
$$c = \phi d = \frac\phi2 h$$
Thus we have reduced to solving the equation:
$$960 = 2a + c = \left(\frac2\phi+\frac\phi 2\right)h$$
In conclusion, $h = \dfrac{960}{\frac2\phi+\frac\phi 2} \approx 469.42$. |
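A two-line sketch of this computation (assuming, as above, a total width of 960px):

```python
# h = 960 / (2/phi + phi/2), with phi the golden ratio.
from math import sqrt

phi = (1 + sqrt(5)) / 2
print(round(960 / (2 / phi + phi / 2), 2))   # 469.42
```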
H: Product of two compact topological spaces is compact.
Proof:
I am following this. But, I feel, I am missing something.
Consider two compact spaces $X_1$ and $X_2$ and some cover $U$ of their product space. Consider an element $x\in X_1$. The sets $A_{x,y}$ within $U$ contain $(x,y)$ for each $y\in X_2$.
Now, the projections $\pi_2(A_{(x,y)})$ form an open cover for $X_2$, which has a finite subcover corresponding to $\{A_{(x,y_i)}\}$.
It's clear up to this point.
Let $B_x$ be the intersection of the projections $\pi_1(A_{(x,y_i)})$ over that finite subcover, which is open. Thus, $\{B_x\}$ forms an open cover of $X_1$, which has a finite subcover, $\{B_{x_i}\}$. The corresponding collection of sets $\{A_{(x_i,y_i)}\}$ is finite, and forms an open subcover of the product.
First, we know the covers cover $X_1$ and $X_2$ respectively. But how do we know that these sets cover $X_1\times X_2$? Second, I believe the intersection defining $B_x$ is not necessarily non-empty. So how can we be sure that it is open?
I would appreciate any comments or other opinions.
P.S. The notation may be, as in the original source, somewhat ambiguous too.
AI: Each $x\in X$ is element of some $B_{x_j}=\bigcap_{i=1}^{n_{x_j}}\pi_X(A_{x_j,y^{x_j}_i})$. It suffices to show that $\{x\}\times Y\subseteq\bigcup_{i=1}^{n_{x_j}}A_{x_j,y^{x_j}_i}$. (I put some super-index on the $y_i$'s because they depend on $x_j$.) At this point it is crucial that the $A\in S$ are products of open sets in $X$ and $Y$, and not just open sets in $X\times Y$. Then, since $x\in\bigcap_{i=1}^{n_{x_j}}\pi_X(A_{x_j,y^{x_j}_i})$ and for each $y\in Y$ there is a $y^{x_j}_k$ ($1\le k\le n_{x_j}$) such that $y\in\pi_Y(A_{x_j,y^{x_j}_k})$, you can derive that $(x,y)\in A_{x_j,y^{x_j}_k}$. |
H: Is the adjoint operation WOT-WOT continuous?
This is a well-known property of the Hilbert-space adjoint operator that it is WOT continuous. Is a similar true for Banach spaces? That is, for a given Banach space $X$ is the operation
$\varphi\colon B(X)\to B(X^*)$, $\varphi(T)=T^*$
continuous with respect to WOT-topologies of $B(X)$ and $B(X^*)$? I guess it will be the case for $X$ reflexive.
AI: No. (Of course it does work for reflexive spaces, as noted by Martin.)
Example.
$X = c_0$, $X^* = l_1$, $X^{**} = l_\infty$.
$T_n : c_0 \to c_0$ is the left-shift by $n$, so that
$$
(a_1,a_2,\dots) \mapsto (a_{n+1},a_{n+2},\dots)
$$
where we deleted the first $n$ coordinates.
Compute $T_n^* : l_1 \to l_1$. It is the right-shift by $n$, so that
$$
(a_1,a_2,a_3,\dots) \mapsto (0,0,\dots,0,a_1,a_2,\dots)
$$
where we inserted $n$ extra zeros at the beginning.
Then: $T_n \to O$ in the WOT for $B(c_0)$, where $O$ is the zero operator. If $x=(x_1,x_2,\dots) \in c_0$ and $y = (y_1,y_2,\dots) \in l_1$, then
$$
\left|\langle T_nx , y\rangle\right| = \left|\sum_{k=1}^\infty x_{k+n} y_k\right| \le \|y\|_1 \left(\max_{j \ge n+1} |x_j|\right) \to 0 .
$$
But $T_n^*$ does not converge to $O$ in WOT for $B(l_1)$. Indeed, take $y \in l_1$ with positive coefficients, and $z = (1,1,\cdots) \in l_\infty$. Then
$$
\langle z,T_n^* y\rangle = \sum_{k=n+1}^\infty y_{k-n} z_k = \|y\|_1
$$
for all $n$ and does not go to zero. |
H: Approximating continuous functions $S^n \to S^n$
I'm trying to check that every continuous function $f:S^n \to S^n$ can be approximated by differentiable ones. Well, by Stone-Weierstrass I can approximate the coordinate functions $f_i:S^n \to \Bbb R$ by differentiable ${\tilde f_i}:S^n \to \Bbb R$. The problem is, of course, I can't guarantee ${\tilde f}:=({\tilde f}_1,\dots,{\tilde f}_{n+1})$ to be in $S^n$. To solve this I thought of projecting ${\tilde f}(p)$ to the point in the sphere that minimizes the distance from ${\tilde f}(p)$ to $S^n$; this projection would still approximate $f$, but I see no reason for this projection to be differentiable. Any hint?
Sorry, by approximate I mean $\sup_{x \in S^n} |{\tilde f}(x) - f(x)| < 1$ (I don't know if $1$ is important here)
AI: You're fine, as long as your approximation satisfies $|f-g|<1$. The function $\pi\colon \mathbb R^{n+1}-\{0\}\to S^n$, $\pi(x)=x/|x|$, is smooth. |
H: Is the composite of an uniformly continuous sequence of functions with a bounded continuous function again uniformly continuous?
Let $\{f_n\}$ be a sequence of functions $f_n: J\to \mathbb{R}$ that converges uniformly to $f:J\to \mathbb{R}$ where $J\subseteq \mathbb{R}$ is an interval.
It is clear that for a uniformly continuous function $g:\mathbb{R}\to\mathbb{R}$, the sequence $\{g\circ f_n\}$ converges uniformly to $g\circ f:J\to \mathbb{R}$. There is a counterexample, if $g$ is only continuous.
If $J$ is compact, there is no such counterexample because then every continuous function $g$ is uniformly continuous. If $J$ is not compact, bounded and continuous for $g$ does not imply uniformly continuous.
Let $g:\mathbb{R}\to\mathbb{R}$ be bounded and continuous and $\{f_n\}$ a sequence of functions $f_n: J\to \mathbb{R}$ that converges uniformly to $f:J\to \mathbb{R}$. Does the sequence $\{g\circ f_n\}$ converges uniformly to $g\circ f:J\to \mathbb{R}$? If not, what is a counterexample?
AI: Hint. $g(x) = \sin(x^2)$, $f(x)=x$, and find constants $a_n \to 0$ so that $f_n(x) = x+a_n$ does what you want. So that for each $n$ there is $x$ with $g(x)=\sin(2\pi n) = 0$ and $g(f_n(x))=\sin(2\pi(n+1/2)) = 1$. |
H: we need to know what $f(z)={e^z+1\over e^z-1}$ has at $z=0$
We need to determine which of the following hold for $f(z)={e^z+1\over e^z-1}$ at $z=0$:
removable singularity
a pole
essential singularity
residue is $2$
For a removable singularity we need $\lim_{z\to 0}zf(z)=0$, but here this is not the case, so $1$ is false. Clearly $2$ is true, and $3$ is false. I have calculated that $\lim_{z\to 0}zf(z)=2$, so the residue is $2$ at the simple pole $z=0$. Am I right in all cases?
AI: Yes, you are right.
Yes, you are right.
Yes, you are right.
Yes, the residue is 2.
I don't have too much to add unfortunately. |
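A quick symbolic check of the residue claim (a sketch using SymPy):

```python
# Residue of (e^z + 1)/(e^z - 1) at z = 0, and the limit computed in the question.
from sympy import symbols, exp, residue, limit

z = symbols('z')
f = (exp(z) + 1) / (exp(z) - 1)

print(residue(f, z, 0))     # 2
print(limit(z * f, z, 0))   # 2
```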
H: Show $\frac{x_1}{x_n} + \frac{x_2}{x_{n-1}} + \frac{x_3}{x_{n-2}} + \dots + \frac{x_n}{x_1} \geq n$
I was recently asked this question which stumped me.
How can you show $\dfrac{x_1}{x_n} + \dfrac{x_2}{x_{n-1}} + \dfrac{x_3}{x_{n-2}} + \dots + \dfrac{x_n}{x_1} \geq n$ for any positive reals $x_1, x_2, \dots, x_n$?
AI: By the AM-GM inequality, we have
$$\frac{x_1/x_n+x_2/x_{n-1}+\cdots+x_n/x_1}{n}\geq\sqrt[n]{\frac{x_1}{x_n}\frac{x_2}{x_{n-1}}\cdots\frac{x_n}{x_1}}=\sqrt[n]{1}=1.$$ |
H: Why isn't the derivative of $e^x$ equal to $xe^{(x-1)}$?
When we take a derivative of a function where the power rule applies, e.g. $x^3$, we multiply the function by the exponent and subtract the current exponent by one, receiving $3x^2$. Using this method, why is it that the derivative for $e^x$ equal to itself as opposed to $xe^{x-1}$? I understand how the actual derivative is derived (through natural logs), but why can't we use the original method to differentiate? Furthermore, why does the power rule even work? Thank you all in advance for all of your help.
AI: You are confusing things. If I define $f : \mathbb{R} \to \mathbb{R}$ by $f(x)=x^n$ this is very different from defining $g: \mathbb{R} \to \mathbb{R}$ by $g(x)= a^x$; note that in the first one the exponent is not varying, while in the second function the variable appears in the exponent.
For the first function the derivative is just $f'(x) = nx^{n-1}$; for the second one things are different. It turns out that first you need to define what it means to raise something to a real power (notice that the usual definition doesn't work: what would it mean to multiply a number by itself $\pi$ times?). For reasons that I won't explain here we define this function as:
$$a^x = e^{x\ln a}$$
In that case, if we know how to differentiate $e^x$ (and usually when we construct this, we already know), we'll have the following:
$$(\ln \circ g)(x)=x \ln a$$
Now the chain rule gives:
$$\ln'(g(x))g'(x)=\ln a$$
However $\ln'(x) = 1/x$ because of the construction of $\ln$, and $g(x)=a^x$, so that we have:
$$\frac{1}{a^x}g'(x)=\ln a \Longrightarrow g'(x) = a^x \ln a$$
Notice that there was a crucial appeal to the definition $a^x = e^{x \ln a}$. To know why we define things this way look at Spivak's Calculus, there's an entire chapter devoted to all the constructions about logs and exponentials. |
H: Uniform integrability of RV's
$\mathbf{Theorem}$: Let $Y \in \mathbb{L}_1$, then the RV $(\mathbb{E}[Y \mid \mathcal{F}], \mathcal{F} \subset \mathcal{A} \space \sigma\text{-algebra})$ are uniformly integrable.
$\mathbf{Proof}$: Choose $K \in (0,\infty)$:
\begin{align}
\mathbb{E}[\mathbb{E}[Y \mid \mathcal{F}] \mathbf{1}_{|\mathbb{E}[Y \mid \mathcal{F}]| \ge K}] &\le \mathbb{E}[\mathbb{E}[Y \mid \mathcal{F}] \mathbf{1}_{\mathbb{E}[|Y| \mid \mathcal{F}] \ge K}] \\
&=\mathbb{E}[Y \mathbf{1}_{\mathbb{E}[|Y| \mid \mathcal{F}] \ge K}]
\end{align}
Next, we know, that
\begin{align}
\mathbb{P}(\mathbb{E}[Y \mid \mathcal{F}] \ge K) &=\int_\Omega \mathbf{1}_{\{\mathbb{E}[Y \mid \mathcal{F}] \ge K\}} \, d \mathbb{P}\\
&\le \frac{\mathbb{E}[\mathbb{E}[Y \mid \mathcal{F}]]}{K} = \frac{\mathbb{E}[|Y|]}{K}
\end{align}
($\mathbf{Q}$: How do we know that? Is it an application of the martingale inequality - generalized Kolmogorov for $\psi(K) =K$ and $U = Y$? My guess is no - then by what property do we obtain the above inequality?)
Hence,
$$\sup\left\{ \mathbb{E}[\mathbb{E}[Y \mid \mathcal{F}] \mathbf{1}_{|\mathbb{E}[Y \mid \mathcal{F}]| \ge K}] \right\} \le \sup \left\{\mathbb{E}[|Y| \mathbf{1}_A ]: \mathbb{P}(A) \le \frac{\mathbb{E}[|Y|]}{K}\right\}$$
And uniform integrability follows from uniform continuity of expectation.
AI: I think you're missing some absolute values throughout your post. The inequality is just Markov's inequality applied to the non-negative variable ${\rm E}[|Y|\mid\mathcal{F}]$. For completeness: If $X$ is any non-negative random variable and $K>0$, then $$X\geq K \iff \frac{X}{K}\geq 1$$ and hence
$$
1_{\{X\geq K\}}\leq \frac{X}{K}1_{\{X\geq K\}}\leq \frac{X}{K}.
$$
Apply this to $X={\rm E}[|Y|\mid\mathcal{F}]$ and integrate both sides. |
H: Evaluating $\int_{0}^{1} \frac{\ln^{n} x}{(1-x)^{m}} \, \mathrm dx$
On another site, someone asked about proving that
$$ \int_{0}^{1} \frac{\ln^{n}x}{(1-x)^{m}} \, dx = (-1)^{n+m-1} \frac{n!}{(m-1)!} \sum_{j=1}^{m-1} (-1)^{j} s (m-1,j) \zeta(n+1-j), \tag{1} $$
where $n, m \in \mathbb{N}$, $n \ge m$, $m \ge 2$, and $s(m-1,j)$ are the Stirling numbers of the first kind.
My attempt at proving $(1)$:
$$ \begin{align}\int_{0}^{1} \frac{\ln^{n}x}{(1-x)^{m}} \, dx &= \frac{1}{(m-1)!} \int_{0}^{1} \ln^{n} x \sum_{k=m-1}^{\infty} k(k-1) \cdots (k-m+2) \ x^{k-m+1} \ dx \\ &= \frac{1}{(m-1)!} \sum_{k=m-1}^{\infty} k(k-1) \cdots (k-m+2) \int_{0}^{1} x^{k-m+1} \ln^{n} x \, dx \\ &= \frac{1}{(m-1)!} \sum_{k=m-1}^{\infty} k(k-1) \cdots (k-m+2) \frac{(-1)^{n} n!}{(k-m+2)^{n+1}}\\ &= \frac{1}{(m-1)!} \sum_{k=m-1}^{\infty} \sum_{j=0}^{m-1} s(m-1,j) \ k^{j} \ \frac{(-1)^{n} n!}{(k-m+2)^{n+1}} \\ &= (-1)^{n} \frac{n!}{(m-1)!} \sum_{j=0}^{m-1} s(m-1,j) \sum_{k=m-1}^{\infty} \frac{k^{j}}{(k-m+2)^{n+1}} \end{align} $$
But I don't quite see how the last line is equivalent to the right side of $(1)$. (Wolfram Alpha does say they are equivalent for at least a few particular values of $m$ and $n$.)
AI: Substitute $x=e^{-t}$ and get that the integral is equal to
$$(-1)^n \int_0^{\infty} dt \, e^{-t} \frac{t^n}{(1-e^{-t})^m} $$
Now use the expansion
$$(1-y)^{-m} = \sum_{k=0}^{\infty} \binom{m+k-1}{k} y^k$$
and reverse the order of summation and integration to get
$$\sum_{k=0}^{\infty} \binom{m+k-1}{k} \int_0^{\infty} dt \, t^n \, e^{-(k+1) t}$$
I then get as the value of the integral:
$$\int_0^1 dx \, \frac{\ln^n{x}}{(1-x)^m} = (-1)^n\, n!\, \sum_{k=0}^{\infty} \binom{m+k-1}{k} \frac{1}{(k+1)^{n+1}}$$
Note that when $m=0$, the sum reduces to $1$; every term in the sum save that at $k=0$ is zero.
Note also that this sum gives you the ability to express the integral in terms of a Riemann zeta function for various values of $m$, which will provide the Stirling coefficients. |
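A numerical sanity check of this series against the integral (a sketch using mpmath; $n=4$, $m=2$ are arbitrary sample values with $n \ge m \ge 2$):

```python
# Compare the integral of ln(x)^n / (1-x)^m over [0,1] with
# (-1)^n n! sum_k C(m+k-1, k) / (k+1)^(n+1).
from mpmath import mp, quad, nsum, binomial, factorial, log, inf

mp.dps = 30
n, m = 4, 2

integral = quad(lambda x: log(x)**n / (1 - x)**m, [0, 1])
series = (-1)**n * factorial(n) * nsum(
    lambda k: binomial(m + k - 1, k) / (k + 1)**(n + 1), [0, inf])

print(integral)   # ~25.9757...
print(series)     # the same value (24 * zeta(4) for this choice of n, m)
```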
H: Localization and a particular exact sequence
Look at this proposition:
Let $R$ be a commutative ring with unity, and let $f_1,\ldots,f_n\in R$
generate the unit ideal in $R$. Then the following sequence is exact:
$$0\longrightarrow R\xrightarrow{\alpha} \bigoplus_{i=1}^n
R_{f_i}\xrightarrow{\beta} \bigoplus_{i,j=1}^n R_{f_if_j}$$
where $\alpha(x)=(\frac{x}{1},\ldots,\frac{x}{1})$ and
$\beta(\frac{x_1}{f_1},\ldots,\frac{x_n}{f_n})=\left(\frac{x_i}{f_i}-\frac{x_j}{f_j}\;\textrm{in
$R_{f_if_j}$}\right)$
Now, from this, should follow that for a fixed $f\in R$, also the following sequence should be exact (in this case the functions are the obvious ones arising from universal properties of localizations and moreover we suppose that $f_1\ldots,f_n$ generate the unit ideal in $R_f$ ):
$$0\longrightarrow R_f\xrightarrow{\alpha} \bigoplus_{i=1}^n
R_{ff_i}\xrightarrow{\beta} \bigoplus_{i,j=1}^n R_{ff_if_j}$$
but i don't understand why.
In general ${(R_f)}_{g}\ncong R_{fg}$ so why is the second sequence exact?
AI: Please double check my work, but I believe that isomorphism actually does hold.
Here is why I think so. The following appears in Atiyah-MacDonald (Introduction to Commutative Algebra, p. 43, exercise 3):
Let $R$ be a ring (commutative with identity), let $S$ and $T$ be two multiplicatively closed subsets of $R$, and let $U$ be the image of $T$ in $S^{-1}R$. Show that the rings $(ST)^{-1}R$ and $U^{-1}(S^{-1}R)$ are isomorphic.
This is proved easily by showing $U^{-1} (S^{-1}R)$ satisfies the universal property of localization of $R$ at $ST$.
Now, if we take $S=\{1,f,f^2,\ldots\}$ and $T=\{1,g,g^2,\ldots\}$, then we conclude $(R_f)_{g/1} \cong R_{fg}$. One might be a little worried that $R_{fg}$ and $(ST)^{-1}R$ actually are the same ring; but...
EDIT: One can show that $(ST)^{-1}R$ satisfies the universal property of localization of $R$ at powers of $fg$.
MORE EXPLANATION: From the exercise we know $(R_f)_{g/1}$ and $(ST)^{-1}R$ are isomorphic. We only want to show $(ST)^{-1}R$ and $R_{fg}$ are the same ring. A-M give a nice criteria for showing a ring is a localization based on the universal property of localizations (corollary 3.2):
Let $S$ be a multiplicatively closed subset of a ring $R$, and suppose $\phi: R \to S^{-1}R$ is the canonical map. If $\psi: R \to Q$ is a ring homomorphism such that i) $s\in S \Rightarrow \psi(s)$ is a unit in $Q$; ii) $\psi(r)=0 \Rightarrow sr=0$ for some $s\in S$; iii) every element of $Q$ is of the form $\psi(r)\psi(s)^{-1}$; then there is a unique isomorphism $h:S^{-1}R \to Q$ such that $\psi = h\phi$.
Usually when I'm trying to show a ring is a localization, I don't build a homomorphism and then try to construct its inverse, I use this proposition.
Now, take $\phi: R \to R_{fg}$ and $\psi: R \to (ST)^{-1}R$. We verify i)-iii):
i) $\psi$ clearly maps any $(fg)^n$ to a unit in $(ST)^{-1}R$.
ii) If $\psi(r) = r/1=0$ in $(ST)^{-1}R$, then we have $f^ng^m r=0$ for some nonnegative integers $n,m$. If $n=m$, then we're done. If not, assume $m<n$ and multiply both sides of this equality by $g^{n-m}$ (and similarly if $m>n$).
iii) Take $r/f^ng^m \in (ST)^{-1}R$. Then
$$\frac{r}{f^ng^m} = \frac{rf^mg^n}{1} \cdot \frac{1}{(fg)^n(fg)^m} = \psi(rf^mg^n) \psi((fg)^n(fg)^m)^{-1}.$$
Now invoke the above mentioned universal property. |
H: Gram determinant
How to prove that
$$\sqrt{\Gamma(\vec{a},\vec{b},\vec{c})}=|(\vec{a},\vec{b},\vec{c})|,$$
where $\Gamma(\vec{a},\vec{b},\vec{c})= \left | \begin{array} {ccc} \vec{a} \cdot \vec{a} & \vec{b} \cdot \vec{a} & \vec{c} \cdot \vec{a} \\
\vec{a} \cdot \vec{b} & \vec{b} \cdot \vec{b} & \vec{c} \cdot \vec{b} \\
\vec{a} \cdot \vec{c} & \vec{b} \cdot \vec{c} & \vec{c} \cdot \vec{c}
\end{array} \right | $.
Thanks!
AI: Let the matrix $X$ have rows $\vec{a},\vec{b},\vec c$.
Then $XX^T$ is the matrix within the vertical bars in your definition of the Gram determinant.
Then you have $\Gamma(\vec{a},\vec{b},\vec{c})=\det(XX^T)=\det(X)\det(X^T)=\dots$
Can you take it from here?
If in fact you do want to work with complex vectors, then I need to alter the above slightly. To be consistent with the order of terms in the dot products, you would have ot use column vectors instead of row vectors, and the Hermitian conjugate instead of the transpose. You'd also need the fact that the determinant of $X^H$ is the complex conjugate of the determinant of $X$, and multiplying them would give you the square modulus of the determinant of $X$. |
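A quick numerical check of the real case (a sketch with a random real matrix):

```python
# sqrt(det(X X^T)) should equal |det(X)| when the rows of X are a, b, c.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 3))   # rows play the role of a, b, c

gram = X @ X.T
print(np.isclose(np.sqrt(np.linalg.det(gram)),
                 abs(np.linalg.det(X))))   # True
```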
H: Estimating $\hat{p}$
let $X\sim Bin(n,p)$ and $\hat{p} =\frac{X}{n}$
a) Find a constant c such that $E[c\hat{p}(1-\hat{p})]=p(1-p)$
My work:
$$
\begin{align}
E[\hat{p}(1-\hat{p})] &=E[\frac{X}{n}]-E[\frac{X^2}{n^2}]\\
&= \frac{1}{n}E[X]-\frac{1}{n^2}E[X^2] \\
\end{align}
$$
And I continue with $E[X] = p$ and $E[X^2] = Var(X)+E[X]^2$ but this gives me
$$
= \frac{p}{n} - \frac{1}{n^2}np(1-p)-\frac{p^2}{n^4}
$$
Which doesn't seem right. Am I doing something wrong?
AI: Since $X$ is a binomial random variable, $E[X]=np$ and $Var(X)=np(1-p)$.
Hence,
$$\frac{1}{n}E[X]-\frac{1}{n^2}E[X^{2}]=\frac{np}{n}-\frac{np(1-p)+n^{2}p^{2}}{n^2}$$
Simplifying,
$$=p-p^{2}-\frac{1}{n}p(1-p)=\left(1-\frac{1}{n}\right)p(1-p)$$
So we simply let $c=\frac{1}{1-\frac{1}{n}}=\frac{n}{n-1}$ |
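A quick Monte Carlo check that $c=\frac{n}{n-1}$ removes the bias (a sketch; the values of $n$, $p$ and the number of replications are arbitrary):

```python
# Simulate X ~ Bin(n, p), form c * phat * (1 - phat), and compare its mean
# with p * (1 - p).
import numpy as np

rng = np.random.default_rng(3)
n, p, reps = 20, 0.3, 200_000

phat = rng.binomial(n, p, size=reps) / n
c = n / (n - 1)
print(np.mean(c * phat * (1 - phat)))   # close to p * (1 - p) = 0.21
```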
H: $x^p-c$ has no root in a field $F$ if and only if $x^p-c$ is irreducible?
Hungerford's book of algebra has exercise $6$ chapter $3$ section $6$ [Probably impossible with the tools at hand.]:
Let $p \in \mathbb{Z}$ be a prime; let $F$ be a field and let $c \in
F$. Then $x^p - c$ is irreducible in $F[x]$ if and only if $x^p - c$
has no root in $F$. [Hint: consider two cases: $\mathrm{char}(F) = p$ and $\mathrm{char}(F) \ne p$.]
I have attempted this a lot. Anyone has an answer?
AI: Perhaps the simplest tool I can think of is the following:
Let $F$ be a field and $f(x)$ an irreducible polynomial over $F$, then there is a field $K\geq F$ where $f(x)$ has a root; as $f(x)$ is irreducible in $F[x]$, a principal ideal domain, then $\langle f(x)\rangle$ is a maximal ideal of $F[x]$, hence $K=F[x]/\langle f(x)\rangle$ is a field, $\bar{x}$ is a root of $f(x)$, and it is easy to see how to embed $F$ into $K$. Now given a polynomial $f(x)\in F[x]$ it is clear how to construct a field $K\geq F$ such that $f(x)$ factors into linear polynomials in $K[x]$.
Now your question can be answered as follows:
Let $K\geq F$ be a field where $x^p-c$ factors into linear polynomials, say $x^p-c=(x-z_1)\cdots(x-z_p)$. Suppose $x^p-c$ is not irreducible in $F[x]$, then there are polynomials $f(x),g(x)\in F[x]$ of degree $\geq 1$ such that $x^p-c=f(x)g(x)$, then we may assume $f(x)=(x-z_1)\cdots(x-z_n)$, where $\deg f(x)=n<p$.
Put $z=z_1\cdots z_n$, then $z$ is $\pm$ the constant term of $f(x)$, so $z\in F$, and $z^p=(z_1\cdots z_n)^p=z_1^p\cdots z_n^p=c^n$. As $p$ is prime there are integers $a,b$ such that $1=ap+bn,$ then $$(c^az^b)^p=c^{ap}z^{bp}=c^{ap}c^{bn}=c,$$
but $c^az^b\in F$, so $x^p-c$ has a root in $F$. |
H: What is the distribution of primes modulo $n$?
Let $n\geq 2$ and let $k$ be "considerably larger" than $n$ (like some large multiple of $n$). Then for each $i$ such that $0<i<n$ and $\gcd(i,n)=1$ let's define
$$c_i=\left|\{p_j\;|\; p_j\equiv i \mod n,\;\mbox{where $p_j$ is the $j$-th prime, $1\leq j\leq k$}\}\right|$$
so $c_i$ represents how many of the first $k$ primes are congruent to $i$ modulo $n$.
What can we say about $c_i$s, that is about the distribution of the first $k$ primes modulo $n$?
I thought the distribution will be seemingly random, and that is mostly true - for example $c_i$s are always very close together. But there are observable non-random patterns. For example for $n=3$, for various $k$s I've tried (up to $10^6$) I always got $c_2>c_1$. If this were just some kind of random discrepancy due to distribution of small primes, it would eventually vanish for large $k$s, which I don't observe.
AI: Dirichlet's theorem on primes in arithmetic progression tells us that the proportion of primes will be the same, for values of $i$ that are coprime to $n$ (and 0 otherwise).
However, Chebyshev's bias tells us that numerically there are more primes congruent to a quadratic non-residue than to a quadratic residue when you count the primes up to $N$. |
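A small numerical illustration of the bias for $n=3$ (a sketch using SymPy; the bound $N$ is arbitrary):

```python
# Count primes below N in each residue class mod 3; the non-residue class 2
# stays slightly ahead of the residue class 1, matching the observation above.
from sympy import primerange

N = 10**6
counts = {1: 0, 2: 0}
for p in primerange(2, N):
    if p % 3 != 0:          # skip p = 3
        counts[p % 3] += 1

print(counts)
```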
H: Geometric tangent vectors - looking for an understanding of what the point is
The problem that I am having is that I am having quite a hard time understanding the idea of geometric tangent vectors and why they are even needed - I mean, one already has the usual tangent of a curve, so why more definitions?
We let $X$ be an $n$-dimensional differentiable manifold, $p \in X$, and $(U,h,V)$ a chart for $X$. The set of all differentiable curves $\gamma: (-\epsilon,\epsilon) \rightarrow X$, $\epsilon > 0$, with $\gamma(0) = p$ is denoted by $K_{p}$. Now $\gamma_{1} \sim \gamma_{2}$ if $\dot{\gamma_{1}}(0)_{h} = \dot{\gamma_{2}}(0)_{h}$, where $\dot{\gamma}(0)_{h} \equiv d/dt(h \circ \gamma)(0) \in \mathbb{R}^{n}$.
Defn: The geometric Tangent space of a space X at p is the quotient $T_{p}^{geom}X \equiv K_{p} /\sim$ where is the equivalence relation of two curves $\gamma_{1},\gamma_{2}$ through point p.
The equivalence class of $\gamma \in K_{p}$ is denoted $[\gamma] \in T_{p}^{geom}X$ and is called the geometric tangent vector of X at p.
Thanks for any help at getting a better insight.
Brian
AI: Yes, one has the usual definition of the tangent vector of a curve, but that tangent is defined when the curve lies in some $\mathbb{R}^n$, i.e. if $\gamma$ is the curve then $\gamma :(0,1)\rightarrow\mathbb{R}^n$. Now when we are considering the tangent vector at a point of a manifold, we have to consider curves lying in that manifold. So my point is that a priori you don't know whether the curve lies in $\mathbb{R}^n$ or not. In other words, you don't know
whether every smooth manifold can be embedded in some $\mathbb{R}^n$ or not.
P.S. The above statement is true. It is called the Whitney embedding theorem. |
H: Differentiable function and polynomials: Proof of $\phi (x)=ce^x + q(x)$ unclear to me.
I came across a proof which I can't quite understand:
If $p:\mathbb{R}\to\mathbb{R}$ is a polynomial of degree $n$ and $\phi:\mathbb{R}\to\mathbb{R}$ is a differentiable function with $\phi'=\phi+p$, there exists a polynomial $q$ of degree $n$ and a number $c\in\mathbb{R}$ such that $\phi(x)=ce^x+q(x)$ for all $x\in\mathbb{R}$.
The proof is given as follows:
$$\phi'=\phi+p$$
$$\Leftrightarrow\phi'-\phi = p$$
$$\Leftrightarrow\phi'*e^{-x}-\phi*e^{-x}=p*e^{-x}$$
$$\Leftrightarrow\left(\frac{d}{dx}\right)[\phi*e^{-x}]=p*e^{-x}$$
$$\int\left(\frac{d}{dx}\right)[\phi*e^{-x}]=\int p*e^{-x}$$
$$\Leftrightarrow\phi*e^{-x}=q*e^{-x}+c$$
$$\Rightarrow\phi(x)=ce^x+q(x)$$
Isn't the notation (especially the integrals) missing something?
Why exactly are we multiplying by $e^{-x}$ in line 3? Is this a special technique?
How does the integral with $p$ in it suddenly yield $q$? And why is the arbitrary constant $c$ on the left hand side of the equation missing?
Also, the conclusion is unclear to me. How exactly do the $e^{-x}$ and $c$ disappear and how does the $ce^x$ appear?
Thank you very much in advance!
AI: Do you remember the product rule from calculus ? Using that we have $${d \over dx} \left( \phi*e^{-x}\right) = {d \over dx}\phi*e^{-x}+{d \over dx}{e^{-x}}*\phi$$
This is the motivation for why we multiplied the equation by $e^{-x}$.
The reason why $p$ suddenly seems like it turned into a $q$ is because you integrated
$$\int p(x)*e^{-x}dx = q(x)*e^{-x}+c$$
Here, when you integrate the polynomial becomes something different.
For example using integration by parts,$$\int x*e^{-x}dx = -(1+x)e^{-x}+c$$
where $p(x) = x$ and $q(x)=-(1+x)$
Finally, the reason why it seems like $e^{-x}$ suddenly disappeared is that it hasn't: the very last step just multiplies both sides by $e^x$, and $e^{-x}*e^x = 1$.
Does this help ? |
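A symbolic check of the statement for one sample polynomial (a sketch using SymPy; $p(x)=x^2$ is an arbitrary choice):

```python
# Solve phi' = phi + x^2 and observe the solution has the form c*e^x + q(x)
# with q a polynomial of the same degree as p.
from sympy import symbols, Function, Eq, dsolve, expand

x = symbols('x')
phi = Function('phi')

sol = dsolve(Eq(phi(x).diff(x), phi(x) + x**2), phi(x))
print(expand(sol.rhs))   # C1*exp(x) - x**2 - 2*x - 2
```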
H: Prove that $C^{0}$ and $\mathbb{R}$ have equal cardinality
How can one prove that $C^{0}$ and $\mathbb{R}$ have equal cardinality?
$C^{0}$ denote the set of all Continuous function $\mathbb{R} \rightarrow \mathbb{R}$
AI: Hint: A continuous function $f:\mathbb{R}\rightarrow\mathbb{R}$ is completely determined by its values on the rational numbers $\mathbb{Q}$ (actually on any dense subset of $\mathbb{R}$). Since $\mathbb{Q}$ is countable, this gives an injection of $C^0$ into $\mathbb{R}^{\mathbb{Q}}$, which has the same cardinality as $\mathbb{R}$. |
H: Is $K=\{f_\lambda(x)=e^{\lambda x}\mid \lambda \in [0,a], x\in [0,b]\}$ equicontinuous?
I am trying to prove that the following set is equicontinuous.
$K=\{f_\lambda(x)=e^{\lambda x}\mid \lambda \in [0,a], x\in [0,b]\}$
I have read two proofs of equicontinuity in two theorems, and both of them use a compactness hypothesis. So I have no idea how to prove this without compactness. Please help me with hints or something similar.
AI: Since $[0,a]\times[0,b]$ is compact the function $(\lambda,x) \longmapsto e^{\lambda x}$ is uniformly continuous. Hence we may choose $\delta$ so that $|x-y| + |\lambda_x-\lambda_y| < \delta$ implies $|e^{\lambda_x x} - e^{\lambda_y y}| < \epsilon$. Hence for any $\lambda$ take $\lambda_x=\lambda_y$ and if $|x-y|<\delta$ then $|f_\lambda(x)-f_\lambda(y)|<\epsilon$ which shows equicontinuity. |
H: The integral over a subset is smaller?
In a previous question I had $A \subset \bigcup_{k=1}^\infty R_k$ where $R_k$ in $\Bbb{R}^n$ are rectangles I then proceeded to use the following inequality $\left|\int_A f\right| \le \left|\int_{\bigcup_{k=1}^\infty R_k} f \right|$ which I am not really certain of. Does anyone know how to prove it? If its wrong what similar inequality should I use to prove the result here.
AI: The inequality is not true in general (think of an $f$ that is positive on $A$ but such that it is negative outside $A$). But it does hold if $f\geq 0$. This is not an obstacle to you using it, because you would just have to split your function in its positive and negative part. |
H: Quotient of Cayley graph of the free group on two generators by a subgroup.
If $F=F(\{a,b\})$ is the free group on two generators $a$ and $b$ and $G$ is the subgroup $$G=\:\langle b^n a b^{-n}|\: n\in \mathbb{N}\rangle \leq F$$
I am trying to work out what the quotient graph $\Delta / G$ looks like, where $\Delta = \Delta(F;\{a,b\})$, the Cayley graph of F with respect to it's generators.
I think that $\Delta$ is the 4-regular tree, and I think I understand that if you quotient out the Cayley graph by the group itself then all of the vertices become one and so you get a wedge of $|S|$ circles where $S$ is the generating set, but clearly when we quotient out by a smaller group, we see that not all vertices become one.
In fact, $G$ is not finitely generated (as far as I can tell), and so do we get a sort of infinite wedge of circles? I assume we would if we looked at the quotient $\Delta(G;S)/G$ for $S$ the set of generators of $G$, but we are taking a bigger initial group so I would think that it would still be an infinite wedge of circles, but with a 'bigger' infinite number... In that case its only vertex would have infinite valence.
Or do we just think of it as the Cayley graph of the finitely generated group with generators $a,b$ but an infinite set of relations (those given by the elements of $G$?) So is it just an infinite 4-regular graph?
AI: If you think of $F$ as acting by left multiplication on the vertices of $\Delta$, then the natural interpretation of the quotient graph is as a graph whose vertices are the distinct cosets $Gg$ of $G$ in $F$, with an edge (labelled $x$) from $Gg \to Ggx$ for each $x \in \{a,a^{-1},b,b^{-1}\}$.
So, if you ignore the labels, it is still an infinite 4-regular graph, but it is no longer a tree, and it is not a simple graph, because there are edges labelled $a$ and $a^{-1}$ from the vertex $Gb^n$ to itself for each $n \in {\mathbb N}$. (Does your ${\mathbb N}$ include $0$?)
Note that if you take $n \in {\mathbb Z}$ rather than $n \in {\mathbb N}$, then you get a normal subgroup, and the only vertices are $Hb^n$. |
H: Isometry fixing two points of a geodesic line
Let $H$ be a hyperbolic space, and let $\Gamma \subset H$ be a geodesic line, i.e., the image of an isometry from $\mathbb{R}$ to $H$. If $f$ is an isometry of $H$ that fixes two distinct points of $\Gamma$, is it true that $f(\Gamma) = \Gamma$?
AI: The answer is yes. This is a simple application of the fact isometries send geodesics to geodesics, and every geodesic can be uniquely defined by two points in the union of the hyperbolic plane and its boundary circle.
If you've not seen either of these facts justified, feel free to ask and I'll try to spend the time explaining them.
To help clarify why every two points define a unique geodesic, I'll explain for the special case when both points $z$ and $w$ are in the hyperbolic plane $\mathbb{H}$. First, recall that geodesics in the hyperbolic upper half plane are either vertical Euclidean straight lines or semicircles with center on the real axis. Recall also that for every pair of points in $\mathbb{H}$, there exists at least one geodesic which passes through both points.
Suppose $z$ and $w$ share the same real coordinate and differ on the imaginary coordinate. Then, there are no semicircles with center on the real line which pass through both $z$ and $w$ and so it must be a vertical straight line which passes through them. It should be clear that such a line is unique.
Now, suppose that $z$ and $w$ differ on their real coordinate. There are a couple of ways that we could show that there is a unique geodesic which passes through both $z$ and $w$. One way would be to show that there is a unique $p\in\mathbb{R}\subset\partial\mathbb{H}$ such that $|z-p|=|w-p|$. This can be shown algebraically. You can also see this geometrically by drawing small circles around $z$ and $w$ of the same radius and then letting them grow continuously. It should be clear that there will be exactly one time when these two circles intersect on the real axis and they intersect at exactly one point.
The reason we want $|z-p|=|w-p|$ is because the point $p$ satisfies the necessary condition for the center of the semi circle passing through both $z$ and $w$. As $p$ is unique, and the length $|z-p|$ has to be the radius of such a semicircle, the geodesic passing through both $z$ and $w$ is uniquely determined by these two numbers.
The other way to see that a unique geodesic passes through $z$ and $w$ is to notice that there exists an isometry of the hyperbolic plane which maps both $z$ and $w$ to the imaginary axis. To describe this isometry, first take any geodesic $\Gamma_0$ which passes through both points (we know at least one geodesic passes through them) and call the end points $a$ and $b$ with $a<b$. Let $f$ be the isometry given by $$f(x)=\frac{x-b}{x-a}.$$ The isometry $f$ maps $b$ to $0$ and $a$ to $\infty$ and so must map $\Gamma_0$ to $\{x\:|\:\mbox{Re}(x)=0\}$ the imaginary axis. As $z,w\in\Gamma_0$, we also know that $f(z)$ and $f(w)$ are on the imaginary axis. By the previous discussion, $f(\Gamma_0)$ is the unique geodesic which passes through $f(z)$ and $f(w)$ but then $f^{-1}(f(\Gamma_0))$ must be the unique geodesic passing through $z$ and $w$ and so $\Gamma_0$ is unique.
From a quick search, you can find a proof of this as well in these lecture notes. |
H: $H$ is a subgroup of $G$ and every coset of $H$ in $G$ is a subgroup of $G$.Then which of the following is true?
$H$ is a subgroup of $G$ and every coset of $H$ in $G$ is a subgroup of $G$.Then which of the following is true?
(A) $H=${$e$}
(B) $H=G$
(C) $G$ must have prime order.
(D) $H$ must have prime order.
AI: Hint:-
(B) is true. Every subgroup of a group must contain the identity, and any two distinct cosets are disjoint. |
H: Lagrange Multiplier Problem - Distance from point With a Circle Constraint
Given a point $P:(x_0,y_0)$ in $\mathbb{R}^2$, and a constraint function
$$x^2+y^2=R^2$$where $R$ is the radius of the circle. The distance from $P$ to any point on the circle is to be minimized using the method of Lagrange Multiplier. The distance $d$ is given by
$$d(x,y)=\sqrt{(x-x_0)^2+(y-y_0)^2}.$$
Then the Lagrange function becomes
$$\mathcal{L}(x,y,\lambda)=d(x,y)+\lambda\ (x^2+y^2-R^2)$$
with the optimality conditions $\partial_x\mathcal{L}=0$, $\partial_y\mathcal{L}=0$ and $\partial_\lambda\mathcal{L}=0$. These conditions yield:
$$\left[(x-x_0)^2+(y-y_0)^2\right]^{-\frac{1}{2}}(x-x_0)+2\ \lambda\ x=0$$
$$\left[(x-x_0)^2+(y-y_0)^2\right]^{-\frac{1}{2}}(y-y_0)+2\ \lambda\ y=0$$
$$x^2+y^2-R^2=0$$
How should I continue from this point?
AI: Accepting @lab's hint, the Lagrange function can be rewritten as
$$\mathcal{L}(x,y,\lambda)=d^2(x,y)+\lambda\ (x^2+y^2-R^2)=(x-x_0)^2+(y-y_0)^2+\lambda\ (x^2+y^2-R^2)$$
Then the optimality conditions would become
$$\partial_x\mathcal{L}=2(x-x_0)+2x\lambda=0,\ \partial_y\mathcal{L}=2(y-y_0)+2y\lambda=0$$
$$\partial_\lambda\mathcal{L}=x^2+y^2-R^2=0$$
The first two conditions yield
$$x=\cfrac{x_0}{1+\lambda},\ y=\cfrac{y_0}{1+\lambda}\tag{1}$$
These can be substituted in the third condition to obtain
$$\cfrac{x_0^2+y_0^2}{(1+\lambda)^2}=R^2$$
Solving for $\lambda$ yields
$$\lambda=\left[\cfrac{x_0^2+y_0^2}{R^2}\right]^{\frac{1}{2}}-1$$
$\lambda$ can be substituted in Equation 1 to obtain
$$(x,y)=\left(x_0 \left[\cfrac{x_0^2+y_0^2}{R^2}\right]^{\frac{-1}{2}},\, y_0 \left[\cfrac{x_0^2+y_0^2}{R^2}\right]^{\frac{-1}{2}}\right)$$
This point represents the point closest to $P$ and the minimum distance can be found by using the original function
$$d(x,y)=\sqrt{(x-x_0)^2+(y-y_0)^2}.$$ |
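A numerical check of the closed-form minimizer (a sketch; the point $P$ and radius $R$ below are arbitrary):

```python
# The closest point works out to R * P / |P|, and the minimum distance to
# |P| - R when P lies outside the circle.
import numpy as np

P = np.array([3.0, 4.0])
R = 2.0

closest = P * (np.dot(P, P) / R**2) ** -0.5   # formula from the answer
print(closest)                                # [1.2, 1.6]
print(np.linalg.norm(closest - P))            # 3.0 = |P| - R
```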
H: Does weak*-convergence imply convergence of the operator norms?
Let $\mathcal A$ be a unital C*-algebra with topological dual $\mathcal A^*$ and denote the unit ball as $B_1^*:=\{\phi \in \mathcal A^* : \vert\vert \phi \vert\vert_{sup}\leq 1\}$.
If $\phi_n \rightarrow \phi$ is a weak*-convergent sequence with $\vert\vert \phi_n \vert\vert_{sup}= 1, \forall n \in \mathbb N$, does this imply that $\vert\vert \phi \vert\vert_{sup} = 1$?
AI: No. Consider the Fourier coefficient functionals on $C[0,2\pi]$.
Or if $H$ is a separable Hilbert space with orthonormal basis $(e_n)$, consider the functionals $T\mapsto\langle Te_1,e_n\rangle$ on $B(H)$.
In general you have $\|\phi\|\leq \liminf_n\|\phi_n\|$, but the inequality can be strict as these examples show. |
H: Let $N_1$ and $N_2$ be normal subgroups of the finite group $G$. Is it true that if $N_1\simeq N_2$ then $G/N_1\simeq G/N_2$?
Let $N_1$ and $N_2$ be normal subgroups of the finite group $G$. Is it true that if $N_1 \simeq N_2$ then $G/ N_1 \simeq G/ N_2$?
AI: No, take $G=\mathbb{Z}_2\oplus\mathbb{Z}_4$, with $H=\langle (1,0)\rangle$ and $K=\langle (0,2)\rangle$. Both $H$ and $K$ are isomorphic to $\mathbb{Z}_2$, but $G/H\cong\mathbb{Z}_4$ while $G/K\cong\mathbb{Z}_2\oplus\mathbb{Z}_2$.
H: Prove $\lim_{n\to \infty}\frac{n}{n+1} = 1$ using epsilon delta
$\lim_{n\to \infty}\frac{n}{n+1} = 1$
Prove using epsilon delta.
AI: By your notation I believe you're talking about the sequence $(a_n)$ of elements of $\mathbb{R}$ defined by:
$$a_n =\frac{n}{n+1}$$
Now, limit for sequences has the following definition: "given $\varepsilon >0$ there's some $n_0 \in \mathbb{N}$ such that if $n > n_0$ then $|a_n - L|<\varepsilon$". So we want some $n_0$ such that:
$$\left|\frac{n}{n+1} - 1\right|<\varepsilon$$
Rewrite this as:
$$\left|\frac{n}{n+1} - \frac{n+1}{n+1}\right| = \left|\frac{1}{n+1}\right|$$
But $n$ is natural, so the quantity inside the absolute value is already positive, and so what we really want is:
$$\frac{1}{n+1}<\varepsilon \Longrightarrow n>\frac{1-\varepsilon}{\varepsilon}$$
This part is the deduction part. Now we prove, we say: given $\varepsilon > 0$ take $n_0 =(1-\varepsilon)/\varepsilon$, then we have that for $n > n_0$:
$$\left|\frac{n}{n+1}-1\right| = \left|\frac{1}{n+1}\right| = \frac{1}{n+1}$$
But $n>n_0$ so that $1/(n+1) < 1/(n_0 + 1)$ and hence:
$$\left|\frac{n}{n+1} - 1\right| < \frac{1}{n_0 + 1} = \frac{1}{\frac{1-\varepsilon}{\varepsilon} + 1} = \varepsilon$$
The important part for you to note is that first we deduce which $n_0$ works, after that we throw this part away usually and just say: "take this $n_0$" so that we show that it really works as predicted. |
H: Finding radius of convergence for this power summation $\sum_{n=0}^\infty \left(\int_0^n \frac{\sin^2t}{\sqrt[3]{t^7+1}} dt\right) x^n$
I have been given this tough power series whose general coefficient $c_n$ is given by an integral.
I am asked to find the radius of convergence $R$
$$\sum_{n=0}^\infty \left(\int_0^n \frac{\sin^2t}{\sqrt[3]{t^7+1}} \, dt\right) x^n $$
Do I first calculate the integral?
Any help would be appreciated!
AI: we have
$$\left|\frac{\sin^2t}{\sqrt[3]{t^7+1}}\right|\leq \frac{1}{\sqrt[3]{t^7+1}}\sim_\infty t^{-7/3}$$
and since $\frac{7}{3}>1$ then the improper integral
$$\int_0^\infty\frac{\sin^2t}{\sqrt[3]{t^7+1}}dt$$
is convergent and the value of the integral is $\ell\neq 0$ so if we denote by
$$a_n=\int_0^n\frac{\sin^2t}{\sqrt[3]{t^7+1}}dt$$
then by ratio test we have
$$\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right|=1$$
hence we have
$$R=1$$ |
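To illustrate numerically that the coefficients $a_n$ settle to a nonzero constant (so the ratio tends to $1$), here is a small sketch using scipy (my addition, with arbitrary sample values of $n$):

```python
import numpy as np
from scipy.integrate import quad

def a(n):
    # a_n = integral from 0 to n of sin(t)^2 / (t^7 + 1)^(1/3) dt
    val, _ = quad(lambda t: np.sin(t)**2 / (t**7 + 1)**(1.0 / 3.0), 0, n, limit=200)
    return val

for n in (5, 10, 50, 100):
    print(n, a(n), a(n + 1) / a(n))   # a_n approaches a nonzero constant; ratio -> 1
```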
H: When computing the Taylor series of $(\cos x)^2$ how does the slide jump to concluding it is $1-(\sin x)^2$?
In the following slide it shows how the taylor series of $(\cos x)^2$ is computed:
On the first line they simply take the taylor series of cosx and write it out twice, which makes sense. However, the order in which they multiply out the 1st line does not make sense to me.
For example, how did they get the 3rd term on the 2nd line: $(x^2/2!)^2$ ?
When I multiply out the 1st line I'm doing a straightforward distribution and I'm getting terms like this:
$$1 - (x^2/2!) + (x^4/4!) - (x^6/6!) - (x^2/2!) + (x^6/2!4!) + (x^{10}/2!6!) + (x^4/4!) - (x^6/4!2!) + (x^8/4!4!) - (x^{10}/4!6!)$$
When I combine the like terms above I do not end up with the 3rd line below and I was wondering if someone could point me in the right direction.
AI: The terms are arranged as follows:
$$
\begin{align}
\cos^2 x &= (a_1 + a_2 + a_3 + a_4\ldots)(a_1 + a_2 + a_3 + a_4\ldots)\\
&= a_1^2 + 2a_1a_2 + a_2^2 + 2a_1a_3 + 2a_2a_3 + 2a_1a_4 + \ldots
\end{align}
$$ |
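If you want to double-check the bookkeeping, the first few terms can be compared symbolically; a small sketch with sympy (my addition, not from the slide):

```python
import sympy as sp

x = sp.symbols('x')
# The rearranged product reproduces the usual expansion of cos(x)^2,
# which matches the series of 1 - sin(x)^2 term by term.
print(sp.series(sp.cos(x)**2, x, 0, 8))
print(sp.series(1 - sp.sin(x)**2, x, 0, 8))
```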
H: Reduction modulo $p$ in number fields
For every prime number $p$, there exist a map
$$f:\mathbb{P}^n(\mathbb{Q})\to\mathbb{P}^n(\mathbb{F}_p)$$ defined
by: for $P\in \mathbb{P}^n(\mathbb{Q})$, we can find a unique tuple
$(x_1,\dots,x_n)\in\mathbb{Z}^n$ of coprime integers such that
$P=[x_1,\dots,x_n]$. Then
$f(P):=[\overline{x_1},\dots,\overline{x_n}]\in \mathbb{P}^n(\mathbb{F}_p)$.
Now, replace $\mathbb{Q}$ by a number field $K$ and $p$ by a prime ideal $\mathfrak{p}$ of $\mathcal{O}_K$. A paper I read speaks of a map
$$f:\mathbb{P}^n(K)\to\mathbb{P}^n(\mathcal{O}/\mathfrak{p}).$$
But how is this defined ? For $P\in\mathbb{P}^n(K)$, we can find $(x_1,\dots,x_n)\in\mathcal{O}_K^n$ such that $P=[x_1,\dots,x_n]$, however, if $\mathcal{O}_K$ is not a UFD, we cannot necessarily choose them such that $x_i\not\in\mathfrak{p}$ for some $i$ as in the rational case...
AI: We cannot necessarily choose them such that $x_i \notin \mathfrak{p}$ for some $i$ as in the rational case...
Right, but we don't need to. In order for the reduction map to be defined we don't need $x_i \in \mathcal{O}_K$ for all $i$; we need $v_{\mathfrak{p}}(x_i) \geq 0$ for all $i$ and $v_{\mathfrak{p}}(x_i) = 0$ for at least one $i$. In other words, we can define the reduction map by replacing $R$ with its localization $R_{\mathfrak{p}}$, which has the same fraction field $K$, and is a DVR hence a UFD.
Note that this works for a nonzero prime ideal $\mathfrak{p}$ in any Dedekind domain $R$ with fraction field $K$. |
H: Difficult Continuity Multivariable Question
WIll you help me understand the following?
$$f(x,y)=\begin{cases}
\sin(y-x) & \text{for } y>|x| \\
0 & \text{for } y=|x| \\
\frac{x-y}{\sqrt{x^2 + y^2}} & \text{for } y<|x|
\end{cases}$$
I need to check differentiability and continuity.
I tried substituting $x= \frac{1}{2} (u-v) , y=\frac{1}{2} (u+v)$ but it doesn't help me...
Will you help me figure this thing out?
Thanks !
AI: Hint:
Calculate $\lim\limits_{x\to{0}}{f(x,\,y)}$ along lines $y=kx$ for different $k.$ |
H: Logarithmic Equations
How does one go about solving:
$(5x+2)^{\frac{4}{3}} = 16$
I'm confused as how to parse through the equation to solve it using logs.
AI: \begin{align}
&(5x+2)^{\frac{4}{3}} = 16\\\implies
&5x+2 = (16)^{\frac{3}{4}}\\\implies
&5x+2 = (2^4)^{\frac{3}{4}}\\\implies
&5x+2 = 2^3\\\implies
&5x+2 = 8\\\implies
&5x= 6\\\implies
&x=\frac{6}{5}
\end{align} |
H: Explanation of where this trig identity comes from
I'm working on a problem but it's been a while since I last saw trig identities so I'd love some help or being pointed in the right direction.
Basically, I'd like to understand where this identity comes from;
$$\tan(2t) = \dfrac{2\tan(t)}{1 - \tan^2(t)}$$
Thanks for any help you can give - if it's useful to know the context of the problem, I'm writing a bit of code that converges on $\pi$ faster than Leibniz's series. (Please don't give too much away about the rest of the problem though, I'd like to get there myself :) )
AI: If we agree on the two identities:
$$\sin \left( 2 \theta \right) = 2 \sin \left(\theta \right) \cos \left(\theta \right)\\
\cos \left(2 \theta \right)=\cos^2\left(\theta \right)-\sin^2\left(\theta \right)
$$
then the rest of it is straight-forward.
$$\tan \left(2\theta \right)=\frac{\sin\left(2\theta \right)}{\cos\left(2\theta \right)}=\frac{ 2 \sin \left(\theta \right) \cos \left(\theta \right)}{\cos^2\left(\theta \right)-\sin^2\left(\theta \right)}=\frac{ 2 \sin \left(\theta \right) \cos \left(\theta \right)}{\cos^2\left(\theta \right)}\frac{1}{1-\tan^2\left(\theta \right)}=\frac{2 \tan{\left(\theta \right)}}{1-\tan^2\left(\theta \right)}$$
Those two identities can be proved in one step using Complex-Numbers. It is a well known identity that $e^{i \theta}=\cos{\left(\theta \right)}+i\sin{\left(\theta \right)}$.
Now consider this:
$$e^{2i\theta}=\cos{2\theta}+i\sin{2\theta}$$
On the other hand:
$$e^{2i \theta}=\left(e^{i\theta} \right)^2=\left(\cos{\left(\theta \right)}+i\sin{\left(\theta \right)} \right)^2=\left(\cos^2\left(\theta \right)-\sin^2\left(\theta \right)\right)+i\left(2 \sin \left(\theta \right) \cos \left(\theta \right)\right)$$
Bingo! |
H: how do determine the distribution of outcomes for a given probability?
For a game I generate various block types given certain odds. Say, there's a $0.001$ chance for the karma block. If a typical game has $600$ blocks, what's the distribution of games that have $0$ karma blocks, $1$ karma blocks, $2$ karma blocks, etc.
I've tried reading the wikipedia page on probability distributions, but I can't figure out how it applies to this.
AI: You want the binomial distribution because the probability of each block being karma or not-karma (whatever that means), that is, the probability of a success or a failure in each trial is constant and independent of the other trials.
In general, the probability that there are k successes out of n trials in a binomial distribution is ${n\choose k} p^k(1-p)^{n-k}$ (where p is the probability of a success). In your case the probability of 'k' "karma blocks" would be ${600 \choose k} (0.001)^k(0.999)^{600-k}$.
What's actually happening is you determine there must be exactly $k$ karma-blocks, and that probability is $0.001^k$. The rest of the 600 have to be failures, hence $0.999^{600-k}$ (because if it's not a success, it's a failure, 1-0.001=0.999). And you don't care about order so you multiply by the number of different orders there could be for k karma blocks in 600 blocks, which is $600 \choose k$ |
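A small sketch in Python (assuming the numbers from the question, $n=600$ and $p=0.001$) that tabulates this distribution:

```python
from math import comb

n, p = 600, 0.001   # values from the question

def pmf(k):
    # probability of exactly k karma blocks among the n blocks
    return comb(n, k) * p**k * (1 - p)**(n - k)

for k in range(5):
    print(k, pmf(k))   # roughly 0.549, 0.330, 0.099, 0.020, 0.003
```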
H: Why $s/(1-s) = 1$ at $s=1$ in bode plot?
Wolfram plot of $\frac{s}{1-s}$ is $\pm\infty$ at $s=1$. But, bode plot of $\frac{s}{1-s}$ results in $1$ at $s=1$. Obviously, this is wrong. Why?
AI: The x-axis in the Bode plot isn't $s$, but $\omega$. Remember that $s=j\omega$, so what you're seeing in the Bode plot is as follows:
$$20\log|H(j1)|=20\log|\frac{j1}{1-j1}|=20\log|\frac{j(1+j)}{2}|$$ |
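A quick numeric check of this magnitude at $\omega=1$ (a sketch; it assumes the transfer function $H(s)=s/(1-s)$ from the question):

```python
import numpy as np

w = 1.0
H = 1j * w / (1 - 1j * w)        # H(j*omega) at omega = 1
print(abs(H))                     # 0.7071... = 1/sqrt(2)
print(20 * np.log10(abs(H)))      # about -3.01 dB on the magnitude plot at omega = 1
```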
H: Modulo question with negative
$60-88 \equiv \,\,? \pmod 5$
$60-88 = -28$
Then what do I do?
Please tell me how to answer this question. Thanks.
AI: When you have a negative number, like $-28 \pmod 5$, all you really need to do keep adding the modulus, or an integer multiple of the modulus $m$, to the negative number until it is zero or in $\mathbb Z_m = \{0, 1, \cdots, m-1\}.\;$
In this case, we can add any multiple of the modulus $5$ to $-28$ until we obtain zero or a positive number, if you are looking to represent the equivalence class of $-28$ modulo $5$ with the least positive integer $x$, $\;x \in \{0, 1, 2, 3, 4\}$.
So given $$60 - 88 = -28 \equiv x\pmod 5\tag{1}$$ and wanting to find such a representative $x$, we know that $5k \equiv 0 \pmod 5$, and simply choose $k = 6$ as our multiple of $5$ so we have $$5 \cdot 6 = 30 \equiv 0 \pmod 5\tag{2}$$
Adding $(1), (2)$:
$$30 - 28 = 2 \equiv x \pmod 5$$
That is, $60 - 88 = - 28 \equiv 2 \pmod 5$.
What this all means is that any multiple of $5$ added to $2$ is congruent to $2$ modulo $5$, including $-28$: the set of solutions of $$x \equiv 2 \pmod 5$$ is $\{x\mid x = 2 + 5n,\ n \in \mathbb Z\}$.
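Incidentally, Python's `%` operator with a positive modulus already returns this least non-negative representative, so a one-line check (my addition) is:

```python
print((60 - 88) % 5)   # 2 -- the representative of -28 modulo 5 in {0, 1, 2, 3, 4}
```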
H: How many epsilon numbers $<\omega_1$ are there?
An epsilon number is an ordinal $\epsilon$ such that $\epsilon=\omega^\epsilon.$ What is the cardinality of the set of all epsilon numbers less than $\omega_1$?
I'm asking this because of a proof I've just read that seems to presuppose that there are countably many such ordinals, and it seems to me intuitively that there should be uncountably many (although I don't know how to prove it).
Added. OK, I've just understood that the proof I mentioned is OK even if there are uncountably many such ordinals, but I still don't see how I can find their number.
AI: Note that $\varepsilon_0$ is countable. So if there are only countably many countable $\varepsilon$ numbers, they would have a countable supremum, $\alpha$. Consider now the same construction as $\varepsilon_0$, starting from $\alpha+1$. That is:
$$\sup\{\omega^{\alpha+1},\omega^{\omega^{\alpha+1}},\ldots\}$$
The result is itself an $\varepsilon$ number, and it is countable (as the countable limit of countable ordinals). But all the countable $\varepsilon$ numbers were assumed to be below $\alpha$, which is a contradiction. |
H: The 'compactness cardinal' of a space
I'm looking for references (and a name!) for the following invariant of a topological space $X$:
The least (infinite) cardinal $\kappa$ such that any open cover of $X$ has a subcover of cardinality less than $\kappa$.
For compact spaces, for example, this cardinal is $\aleph_0$.
AI: Such spaces have been called finally $\kappa$-compact. More generally, $X$ is $[\kappa,\lambda]$-compact if every open cover of $X$ of cardinality at most $\lambda$ has a subcover of cardinality less than $\kappa$. An $[\omega,\kappa]$-compact space is said to be initially $\kappa$-compact. Thus, Lindelöf spaces are finally $\omega_1$-compact, and countably compact spaces are initially $\omega$-compact. |
H: Isometries of a hyperbolic quadratic form
I am reading an article that says "The group of isometries (of a hyperbolic space) of a hyperbolic quadratic form in two variables is isomorphic to the semi-direct product $\mathbb{R} \rtimes \mathbb{Z}/2\mathbb{Z}$".
Could someone help me to understand that fact? First, it seems that the semi-direct product $\mathbb{R} \rtimes \mathbb{Z}/2\mathbb{Z}$ is equal to the direct product $\mathbb{R} \times \mathbb{Z}/2\mathbb{Z}$. And, what means a hyperbolic quadratic form? Is it simply a quadratic form defined in a hyperbolic space?
Thanks a lot.
AI: A hyperbolic quadratic form is a quadratic form such that the matrix, in a suitable basis, is
$$
A =
\left( \begin{array}{rr}
0 & 1 \\
1 & 0
\end{array}
\right).
$$
A two dimensional vector space over $\mathbb R$ with this bilinear form is often called a hyperbolic plane. Given two column vectors $x,y$ the value of the bilinear form is just
$B(x,y) = y^t A x.$ The associated quadratic form is $q(x) = x^t A x = B(x,x).$ We are using this way of writing the quadratic form as in page 13 of GERSTEIN and page 31 of GROVE. The result, for fields not of characteristic 2, is $$ B(x,y) = \frac{q(x+y) - q(x) - q(y)}{2} $$
Let's see, you did not type the result correctly. The orthogonal group, or group of isometries, or automorphism group, or group of automorphs of $q$, consists of those matrices $T$ of determinant $\pm 1$ such that
$$ T^t A T = A, $$ page 26 of Gerstein.
You can work this part out yourself: the isometries with positive determinant, also called the special orthogonal group or the rotation group, are
$$
T =
\left( \begin{array}{rr}
a & 0 \\
0 & \frac{1}{a}
\end{array}
\right)
$$
for nonzero $a.$
The isometries with negative determinant are
$$
T =
\left( \begin{array}{rr}
0 & b \\
\frac{1}{b} & 0
\end{array}
\right)
$$
for nonzero $b.$
The thing you missed in the statement is that it is not $\mathbb R$ being used, it is the multiplicative group $\mathbb R^\ast$ of nonzero reals. Otherwise you have it. This is Theorem 6.1 on page 45 of Grove, or example 2.31 on page 27 of Gerstein.
The two other books I would recommend here are LAM and my favorite (because of number theoretic information) CASSELS
[Image: scan of page 45 from Grove.]
H: Proof that $1/\sqrt{x}$ is itself its sine and cosine transform
As far as I understand, I have to calculate integrals
$$\int_{0}^{\infty} \frac{1}{\sqrt{x}}\cos \omega x \operatorname{d}\!x$$
and
$$\int_{0}^{\infty} \frac{1}{\sqrt{x}}\sin \omega x \operatorname{d}\!x$$
Am I right? If yes, could you please help me to integrate those? And if no, could you please explain me.
EDIT: Knowledge of basic principles and definitions only is supposed to be used.
AI: Consider
$$\int_0^{\infty} dx \, x^{-1/2} e^{i \omega x}$$
Substitute $x=u^2$, $dx=2 u du$ and get
$$2 \int_0^{\infty} du \, e^{i \omega u^2}$$
The integral is convergent, and may be proven so using Cauchy's theorem. Consider
$$\oint_C dz \, e^{i \omega z^2}$$
where $C$ is a wedge of angle $\pi/4$ in the first quadrant and radius $R$. This integral over the closed contour is zero, and at the same time is
$$\int_0^R dx \, e^{i \omega x^2} + i R \int_0^{\pi/4} d\phi \, e^{i \phi} e^{-\omega R^2 \sin{2 \phi}} e^{i \omega R^2 \cos{2 \phi}} + e^{i \pi/4} \int_R^0 dt \, e^{-\omega t^2} = 0$$
The second integral, because $\sin{2 \phi} \ge 4 \phi/\pi$, has a magnitude bounded by $\pi/(4 \omega R)$, which vanishes as $R \to \infty$. Therefore
$$\int_0^{\infty} dx \, e^{i \omega x^2} = e^{i \pi/4} \int_0^{\infty} dt \, e^{-\omega t^2} = \frac{e^{i \pi/4}}{2} \sqrt{\frac{\pi}{\omega}}$$
Therefore
$$\int_0^{\infty} dx \, x^{-1/2} e^{i \omega x} = 2\cdot\frac{e^{i \pi/4}}{2} \sqrt{\frac{\pi}{\omega}} = (1+i)\sqrt{\frac{\pi}{2 \omega}}$$
The Fourier cosine and sine transforms follow from taking the real and imaginary parts of the above. Note the dependence on $\omega^{-1/2}$ times some scale factor. |
H: Showing a simple property in statistics
I know this is very elementary but I cannot remember how to show $\hat\alpha$ as below.
AI: You minimize the squared error
$$\epsilon^T\epsilon=(y-X\alpha)^T(y-X\alpha)=y^Ty-2\alpha^TX^Ty+\alpha^TX^TX\alpha$$
This expression can be minimized by setting its derivative w.r.t. $\alpha$ equal to zero:
$$-2X^Ty+2X^TX\alpha=0\Rightarrow \alpha=(X^TX)^{-1}X^Ty$$ |
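A quick numerical sketch (the design matrix and response below are made-up test data) showing that the normal-equations formula matches a library least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                       # made-up design matrix
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

alpha_hat = np.linalg.solve(X.T @ X, X.T @ y)      # (X^T X)^{-1} X^T y
alpha_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(alpha_hat)
print(alpha_lstsq)                                  # the two estimates agree
```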
H: $n^5-n$ is divisible by $10$?
I was trying to prove this, and I realized that this is essentially a statement that $n^5$ has the same last digit as $n$, and to prove this it is sufficient to calculate $n^5$ for $0-9$ and see that the respective last digits match. Another approach I tried is this: I factored $n^5-n$ to $n(n^2+1)(n+1)(n-1)$. If $n$ is even, a factor of $2$ is guaranteed by the factor $n$. If $n$ is odd, the factor of $2$ is guaranteed by $(n^2+1)$. The factor of $5$ is guaranteed if the last digit of $n$ is $1, 4, 5, 6,$ $or$ $9$ by the factors $n(n+1)(n-1)$, so I only have to check for $n$ ending in digits $0, 2, 3, 7,$ $and$ $8$. However, I'm sure that there has to be a much better proof (and without modular arithmetic). Do you guys know one? Thanks!
AI: Your proof is good enough. There's a slight improvement, if you want to avoid modular arithmetic / considering cases.
$n^5 - n$ is a multiple of 5
$\Leftrightarrow$ $ n^5 + 10 n^4 + 35n^3 + 50 n^2 + 24 n = n^5 -n + 5(2n^4 + 7n^3 + 10n^2 + 5n) $ is a multiple of 5. The latter is just $n(n+1)(n+2)(n+3)(n+4)$, which is the product of 5 consecutive integers, hence is a multiple of 5.
Note: You should generally be able to do the above transformation, and can take the product of any 5 (or k) consecutive integers, if you are looking at a polynomial of degree 5 (or k). |
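A brute-force check over a range of integers (a small sketch, not a proof):

```python
# check the claim directly for the first few thousand integers
assert all((n**5 - n) % 10 == 0 for n in range(-2000, 2000))
print("n^5 - n is divisible by 10 for every tested n")
```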
H: Integrate function $\int_{0}^{\infty}\lambda^{x}e^{-2\lambda}d\lambda$
How can I integrate this function? It's originated by an exponential prior and a poisson likelihood.
$\int_{0}^{\infty}\lambda^{x}e^{-2\lambda}d\lambda$
AI: This is related to the gamma function:
$$\Gamma(z) = \int_0^{\infty} dt \, t^{z-1} e^{-t}$$
In your case, sub $t=2 \lambda$, so that $\lambda = t/2$ and $d\lambda = dt/2$, and get
$$2^{-(x+1)} \int_0^{\infty} dt \, t^x \, e^{-t} = 2^{-(x+1)} \Gamma(x+1) = \frac{x}{2^{x+1}} \Gamma(x)$$ |
H: What makes $5$ and $6$ so special that taking powers doesn't change the last digit?
Why are $5$ and $6$ (and numbers ending with these respective last digits) the only (nonzero, non-one) numbers such that no matter what (integer) power you raise them to, the last digit stays the same? (by the way please avoid modular arithmetic) Thanks!
AI: The problem is solving $x^2\equiv x\pmod{10}$, or $x(x-1)\equiv 0\pmod{10}$, which means finding integers $x$ such that $10$ is a factor of $x(x-1)$. For that to hold, either $x$ or $x-1$ must be a multiple of $5$, which means the last digit of $x$ is $0,1,5,$ or $6$. Then it is a simple verification that the equation holds in each of these cases.
Rephrased without "$\pmod{10}$" notation, this could be expressed as follows. We are looking for integers $x$ such that $x$ and $x^2$ have the same last digit. This is the same as saying that the last digit of $x^2-x=x(x-1)$ is $0$. That means that $x(x-1)$ is a multiple of $10$. See above for the rest. |
H: If we restrict the Heaviside step function to $\mathbb{R}\setminus\{0\},$ does it suddenly become continuous?
The Heaviside step function is discontinuous, despite that its continuous at every point except $0$.
Supposing we restricted it to $\mathbb{R}\setminus\{0\},$ does it suddenly become continuous?
I think 'yes,' but it seems a bit counterintuitive. I just want someone to confirm that I'm not fooling myself.
AI: Yep! Continuous at every point in the domain means continuous! Think of $\mathbb{R} \setminus 0$ as being two completely separate chunks of space. You probably wouldn't have a problem with the function that's zero on $[0,1]$ and one on $[3,4]$ being continuous. Something similar happens here. |
H: How to convert a geometric series so that exponent matches index of sum?
I need to convert the following series into a form that works for the equation $$\frac{a}{1-r}$$ so that I can calculate its sum. But the relevant laws of exponents are eluding me right now.
$$\sum_{n=1}^{\infty}\left(\frac{4}{10}\right)^{3n-1}$$
How do I get the 3 out of the exponent?
AI: Hint: $$\left(\frac{4}{10}\right)^{3n-1} = \left(\frac{4}{10}\right)^{3n}\left(\frac{4}{10}\right)^{-1} = \frac{10}{4}\cdot\left(\left(\frac{4}{10}\right)^{3}\right)^{n}$$ |
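A numeric sanity check (a sketch; the partial sum is truncated at an arbitrary 60 terms):

```python
r = (4 / 10) ** 3                       # common ratio after reindexing: 0.064
closed_form = (10 / 4) * r / (1 - r)    # (10/4) * sum_{n>=1} r^n = (10/4) * r/(1-r)
partial = sum((4 / 10) ** (3 * n - 1) for n in range(1, 60))
print(closed_form, partial)             # both about 0.17094
```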
H: Prove that there is a unique $A\in\mathscr{P}(U)$ such that for every $B\in\mathscr{P}(U), A\cup B = B$
$U$ can be any set.
For the existence element of this proof, I have $A = \varnothing$
But it's for the uniqueness element of this proof where I am having trouble. So far I have:
$\forall(C\in\mathscr{P}(U))(\forall(B\in\mathscr{P}(U))(C\cup B=B)\rightarrow C\in\mathscr{P}(U) = A\in\mathscr{P}(U))$
Assume $C\in\mathscr{P}(U)$ is arbitrary and assume $\forall(B\in\mathscr{P}(U))(C\cup B=B)$ then:
What I have left to prove is:
$C\in\mathscr{P}(U) = A\in\mathscr{P}(U) $
which is where I am having trouble. I can say that $C = \varnothing$ which would then equal $A$ but it is not certain that $C = \varnothing$.
AI: For uniqueness: The claim has to hold in particular for $B = \varnothing$. So if $A \neq \varnothing$, what would happen? |
H: Determine whether this relation is reflexive, symmetric...
Determine whether this relation $R$ on the set of all integers is reflexive, symmetric, anti-symmetric and/or transitive where $x\,R\,y$ iff
$x = y + 1$ or $x = y-1$
It is not reflexive:
Let $x = 2$: $2\neq 2 + 1$ and $2 \neq 2 - 1$.
It is symmetric:
If $x = y + 1$ then $y = x - 1$
and if $x = y - 1$ then $y = x + 1$.
It is not anti-symmetric:
Let $x = 3$ and $y = 2$;
then $3 = 2 + 1$ ($x\,R\,y$) and $2 = 3 - 1$ ($y\,R\,x$)
And let $x = 2$ and $y = 3$;
then $2 = 3 - 1$ ($x\,R\,y$) and $3 = 2 + 1$ ($y\,R\,x$)
but $3\neq 2$.
Can anyone prove whether this relation is transitive or not?
thanks.
AI: You did fine with reflexivity, and with symmetry and antisymmetry.
Now, let's look at transitivity:
We can summarize the relation as follows: $xRy$ if and only if $x$ and $y$ differ by $1$.
So, suppose $xRy$ ($x$ and $y$ differ by one) and $yRz$ ($y$ and $z$ differ by one),
What may be the case about the difference between $x$ and $z$?
(Suspect a counterexample exists: just find $x, y, z$ such that $x = y - 1, y = z - 1 \implies x = z - 2$.
Or, vice versa, $x = y+1, y = z+1 \implies x = z+2)$
Let $x = 0$, $y = 1$, and $z = 2$, so we certainly have $x, y, z \in \mathbb Z$
Clearly, $x = y - 1$ since $0 = 1-1$, so $x R y$,
And $y = z - 1$, since $1 = 2 - 1$, so $y R z$.
But it is not true that $x = z + 1 $, since $0 \neq 2+1 = 3$ nor
does $x = z - 1$, since $0 \neq 2 - 1 = 1$.
Hence, $x$ is not related to $z$, and transitivity fails.
All we need is one counterexample to prove that a relation is non-transitive, and we've just found one such counterexample.
H: Proof of Real Number Property
In his introduction to Calculus, Apostol gives a foundation for the properties
of real numbers. After laying down the field and order axioms but before stating the least-upper-bound axiom, the author uses the set $T = \{x \ : \ 0 \leq x < 1\}$ as an example of a set which has no maximum element, yet has a least-upper-bound, namely, 1.
Is the author relying on intuition to make these assertions, or can we really prove these statements even before introducing the least-upper-bound axiom?
AI: The least upper bound axiom is not needed to prove that this specific, concretely-given set of real numbers has a least upper bound: all that’s needed is to prove that $x<1$ for each $x\in T$, which is a matter of definition, and that if $u<1$, then there is an $x\in T$ such that $u<x$. The latter can be done by taking $x=\frac12(u+1)$: that $u<x<1$ can be proved from the field and order axioms. |
H: Properties of limits when dealing with functions and parentheses .
My calculus instructor recently mentioned some odd properties of limits that I don't recall ever seeing, and seem alien to me. He says that the following statements are allowed:
$$\lim_{n\to\infty}\sin\left(\frac{1}{n}\right)=\sin\left(\lim_{n\to\infty}\frac{1}{n}\right)=\sin(0)=0$$
What rules allow for this? The only ones I've been made aware of previously were the following:
$$\lim_{n\to\infty}cf(n)=c\lim_{n\to\infty}f(n)$$
$$\lim_{n\to\infty}f(n)\pm g(n)=\lim_{n\to\infty}f(n)\pm\lim_{n\to\infty}g(n)$$
$$\lim_{n\to\infty}f(n)\cdot g(n)=\lim_{n\to\infty}f(n)\cdot\lim_{n\to\infty}g(n)$$
$$\lim_{n\to\infty}\frac{f(n)}{g(n)}=\frac{\lim_{n\to\infty}f(n)}{\lim_{n\to\infty}g(n)},\lim_{n\to\infty}g(n)\neq 0$$
AI: The general form is $\lim_{n \to \infty} f(a_n) = f(\lim_{n \to \infty}a_n)$.
You can use this if the limit $\lim_{n \to \infty} a_n = a$ exists and $f$ is continuous at $a$. (Which is the case in your example).
In fact, this is one of the equivalent definitions of continuity at a point for real valued functions. |
H: Why is the bridge index of the trefoil equal to 2?
It seems to me, all three 3 bridges are needed?
AI: The bridge index of a knot $K$ is the minimum number of local maxima (or equivalently, minimum number of local minima) of the height function on all possible knot diagrams of $K$. So from the standard picture of the trefoil, you can see that its bridge index is at most $2$. It is clear that the only knot with bridge index $1$ is the unknot, so in fact the bridge index of the trefoil $T_{2,3}$ is $2$. Here's a picture of $T_{2,3}$ with its standard $2$-bridge presentation: |
H: Show $z^4 + e^z = 0$ has a solution in $\{z \in \mathbb{C} : |z| \leq 2\}$
Show $z^4 + e^z = 0$ has a solution in $\{z \in \mathbb{C} : |z| \leq 2\}$.
I would like if in the proof the tools of algebraic topology were preferred over the other tools of analysis, complex analysis, algebra etc.
AI: The tools are the same. Let $f(z)=z^4+e^z$ and let $g(z)=z^4$. Show that $\dfrac{f}{|f|}$ and $\dfrac{g}{|g|}$ are homotopic as maps $\{z: |z|=2\} \to S^1$. Note that if $h\colon S^1\to S^1$ has nonzero degree, then $h$ has a root in $D^2$. (Why?) |
H: Master Theorem change of variables with root other than 2
I'm working on this:
$$T(n) = 12T(n^{1/3}) + \log(n)^{2}$$
Using change of variables, and substituting $m = \log n$, I get as far as:
$$S(m) = 12S(m/3) + m^{2}$$
I see how a square root would work but with a cube root I'm not sure that $\Theta(m \log m)$ makes sense since the convention seems to be that this means $\log_2 n$ and I'm not seeing how that accounts for a base $3$ $\log$.
AI: Let $n=3^m$ (that is, let $m=\log_3n$). Then by a change of variables, we have:
$$
T(3^m) = 12T((3^m)^{1/3}) + (\log(3^m))^{2} = 12T(3^{m/3}) + (m\log3)^{2} = 12T(3^{m/3}) + (\log3)^{2}m^2
$$
Renaming $S(m)=T(3^m)$ yields: $\boxed{S(m)=12S(m/3)+(\log3)^{2}m^2}$
Since $\log_3{12}>\log_3{9}=2$, it follows that $(\log3)^{2}m^2=O(m^{\log_3{12}-\epsilon})$ if $0<\epsilon\le\log_3{12}-2$. Thus, by Case 1 of the Master Theorem, we have $\boxed{S(m)=\Theta(m^{\log_3{12}})}$.
$$
T(n)=T(3^m)=S(m)=\Theta(m^{\log_3{12}})=\Theta((\log_3n)^{\log_3{12}})=\boxed{\Theta((\log n)^{\log_3{12}})}
$$
Using the identity $x^{\log_b{y}}=y^{\log_b{x}}$, we can alternatively write this as:
$$
T(n)=\boxed{\Theta(12^{\log_3{\log n}})}
$$ |
H: If $\lim_{x \to \infty} \frac{f'(x)}{x}=2$ does it follow that $\lim_{x \to \infty} \frac{f(x)}{x^2}=1$?
I need to show that the following statement is true or false. $$\displaystyle\lim_{x \to \infty} \frac{f'(x)}{x}=2 \Rightarrow \displaystyle\lim_{x \to \infty} \frac{f(x)}{x^2}=1$$
I considered $f(x)=x^2$, and it seems that the above statement is true. If so, how do I go about proving this? Or maybe there is a counterexample?
I appreciate your help.
AI: $\lim\limits_{x\to\infty}\dfrac{f'(x)}{x}=2$, therefore $\lim\limits_{x\to\infty}{f'(x)}$ equals $\infty$. Now apply De L'Hospital's rule to the second limit and the result follows.
H: Solving the equation $10^{-x} = 5^{2x}$ with logarithms
$$10^{-x} = 5^{2x}$$
I'm having trouble isolating $x$. I get both logs on one side and then I'm stuck because I have nothing to divide with on the other side, and I can't factor it.
Thanks
AI: If $10^{-x} = 5^{2x}$, then $-x\log(10) = 2x\log(5)$ thus either $x=0$ or $-\log(10)=2\log(5)$ (or both). Since $-\log(10)\neq2\log(5)$ we must conclude that $x=0$. |
H: Simplified form of $\left(6-\frac{2}{x}\right)\div\left(9-\frac{1}{x^2}\right)$.
Tried this one a couple of times but can't seem to figure it out.
I am trying to simplify the expression:
$$\left(6-\frac{2}{x}\right)\div\left(9-\frac{1}{x^2}\right)$$
So my attempt at this is:
$$=\bigg(\dfrac{6x}{x}-\dfrac{2}{x}\bigg)\div\bigg(\dfrac{9x^2}{x^2}-\dfrac{1}{x^2}\bigg)$$
$$=\bigg(\dfrac{6x-2}{x}\bigg)\div\bigg(\dfrac{9x^2-1}{x^2}\bigg)$$
$$=\dfrac{6x-2}{x}\cdot\dfrac{x^2}{9x^2-1}$$
$$=\dfrac{(6x-2)(x^2)}{(x)(9x^2-1)}$$
$$=\dfrac{6x^3-2x^2}{9x^3-x}$$
This is the part that I get stuck at. I can't decide what to factor out:
$$=\dfrac{x(6x^2-2x)}{x(9x^2-1)}$$
$$=\dfrac{(6x^2-2x)}{(9x^2-1)}$$
Edit, missed a difference of squares:
$$=\dfrac{2x^2(3x-1)}{x(9x^2-1)}$$
$$=\dfrac{2x^2(3x-1)}{x(3x-1)(3x+1)}$$
Giving a final answer of:
$$=\boxed{\dfrac{2x}{3x+1}}$$
AI: HINT: Pull out everything that you can: $6x^3-2x^2=2x^2(3x-1)$, and $9x^3-x=x(9x^2-1)$. Then notice that $9x^2=(3x)^2$, so that $9x^2-1=(3x)^2-1^2=(3x-1)(3x+1)$. Finally, do the cancellations that are now apparent. |
H: Definition of topological group via neighborhood base -- weird difference condition
It's been over 2 years since I've seriously done point-set topology, so my apologies if this is simple. I'm working out of Liu's Algebraic Geometry and Arithmetic Curves. My question concerns a definition on page 16.
Liu defines a topological group in the standard way. Let us only consider abelian groups, and define a topological group to be a group with a topological structure so that $(x,y)\rightarrow x-y$ form $G\times G$ to $G$ is continuous.
It is not hard to see that if $U$ is an open set and $x$ is an element, then $U\rightarrow x+U$ is a homeomorphism, so we can specify the topological on a topological group $G$ by giving a neighborhood base at $0$.
Let $S$ be such a neighborhood base at $0$ of some group $G$, and let $V$ be any element in $S$. Liu claims there must exist elements $V_1,V_2\in S$, so that $V$ contains $V_1-V_2$ so that subtraction is continuous, and that the existence of such elements will make subtraction continuous. If we consider the inverse of $V$, then we would have $V_1\times V_2 \subset f^{-1}(V)$, where $f$ is subtraction. But this only gives an open neighborhood of $0$ in the inverse, and we need an open neighborhood of every point. Certainly the existence of $V_1$ and $V_2$ is necessary for subtraction to be continuous in this topology, but why is it sufficient?
Here's an attempt, using the fact that the topology is homogeneous. Let $U$ be any open set. Letting $f$ be subtraction, we must show $f^{-1}(U)$ is open. That is, given any $(a,b)\in f^{-1}(U)$, we must show an open neighborhood of $(a,b)$ is contained in $f^{-1}(U)$. Suppose $f(a,b)=c\in U$. Consider $c^{-1}U$, a neighborhood of $0$. By hypothesis, there exist $V_1,V_2$, both neighborhoods of $0$, so that $f(V_1\times V_2)\subset c^{-1}U$. Then $(V_1+a,V_2+b)$ gives the desired neighborhood of $(a,b)$ in $f^{-1}(U)$, since $f((V_1+a,V_2+b))\subset U$.
AI: I'm confused by your "presumably ..." Use continuity of subtraction, taking the preimage $U$ of $V$, noting that $(0,0)\in U$, and so we get $V_1\times V_2\subset U$. OK, you edited, so now I have to inquire: $S$ is the basis at $0$, so what are you worrying about?
OK, given the latest version, given $(a,b)\in G\times G$, let $V+(a-b)$ be a nbhd of $a-b$ and check that subtraction maps $(a+V_1)\times (b+V_2)$ into it. |
H: Find triple integral over a tetrahedron constructed by 3 planes
The question is as follows:
Find triple integral of f(x,y,z) = xy + z by dxdydz
Over the tetrahedron D created by the following coordinates: $(0,0,0),
(1,0,0), (0,1,0)$ and $(0,0,1)$
My answer doesn't agree with the book's answer. I got $\frac{3}{20}$ while the books gives $\frac{1}{20}$. I suspect that my integrands for $dxdy$ are incorrect somehow, but I still don't quite see what I did wrong >_<
My work:
(1) The region $D$ looks like the following picture
(2) So my thoughts are the following:
a. To work with $dx$, I consider the $xy$ plane (or $yx$ plane). I note that $y = 0$ when $x = 1$ and $y =1$ when $x = 0$. So the upper bound is the line $y = - x + 1$, or $x = 1 - y$. The lower bound is $x = 0$. So $x$ goes from $0$ to $1-y$.
b. With $dy$, I consider the $yz$ plane (or $zy$ plane). Since $z = 0$ when $y = 1$ and $z=1$ when $y = 0$, the equation for upper bound of $y$ is $z = -y + 1$, or $y = 1- z$. So $y$ goes from $0$ to $1-z$
c. With $dz$, lower bound is $0$ and upper bound is $1$
So the triple integral becomes
$$ \int_0^1 \int_0^{1-z} \int_0^{1-y} (xy + z) dxdydz$$
And after some lengthy works, I got $\frac{3}{20}$
But the book's answer is $\frac{1}{20}$. So is my work wrong anywhere?
I strongly suspect that to work with $dx$, I should consider the $xz$ plane instead of $xy$ plane (I'm trying the "shooting arrow" method through the region)
Thank you in advance.
AI: The plane containing $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$ is $x+y+z=1$. Therefore, your interval for the x-axis is $x=0$ to $x=1-y-z$, for the y-axis it's $y=0$ to $y=1-z$, and for the z-axis it's $z=0$ to $z=1$.
Therefore your integral should be:
$$\int_{0}^{1}\int_{0}^{1-z}\int_{0}^{1-y-z}(xy+z)\space{dx}\space{dy}\space{dz}$$ |
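For reference, the integral with these bounds can be checked symbolically (a sketch using sympy):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# innermost integral first: x from 0 to 1-y-z, then y from 0 to 1-z, then z from 0 to 1
print(sp.integrate(x*y + z, (x, 0, 1 - y - z), (y, 0, 1 - z), (z, 0, 1)))   # 1/20
```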
H: Evaluating $\sum_{k>\frac{N}{2}}\frac {1}{N}\cdot \frac{N-k}{k}$
Assuming $N$ is even, how can I evaluate the following sum:
$$\sum_{k>\frac{N}{2}}\frac{\binom{N}{k}(k-1)!(N-k)!}{N!}\cdot\frac{N-k}{N}=
\sum_{k>\frac{N}{2}}\frac {1}{N}\cdot \frac{N-k}{k}$$
I really don't know how to do it...
Thanks!
(Not HW BTW)
AI: First thing I would do is set $N=2M$. As observed by i707107, the lhs is $0$ for $k>N$, so your summation is finite and splits as follows
$$
S_N=\sum_{k=M+1}^{2M}\frac{1}{2M}\cdot\frac{2M-k}{k}=\sum_{k=M+1}^{2M}\frac{1}{k}-\frac{1}{2M}\sum_{k=M+1}^{2M}1
$$
$$
=\sum_{k=1}^{2M}\frac{1}{k}-\sum_{k=1}^{M}\frac{1}{k}-\frac{1}{2M}\cdot M
=H_{2M}-H_M-\frac{1}{2}
$$
$$
=H_N-H_{N/2}-\frac{1}{2}
$$
where $H_N$ denotes the $N$th harmonic number. You can get asymptotics for the latter if you wish: $H_N=\log N+\gamma+o(1)$ where $\gamma\simeq 0.577\ldots$ is Euler-Mascheroni constant. It follows that
$$
S_N=\log N+\gamma -(\log(N/2)+\gamma)-\frac{1}{2}+o(1)=\log 2-\frac{1}{2}+o(1)
$$
$$
\longrightarrow\log 2-\frac{1}{2}\simeq 0.193147\ldots
$$
If you simply want to bound it from below as you say, you can use integral comparison
$$
S_N=-\frac{1}{2}+\sum_{k=M+1}^{2M}\frac{1}{k}\geq-\frac{1}{2}+\int_{M+1}^{2M+1}\frac{1}{t}dt=-\frac{1}{2}+\log\left(\frac{2M+1}{M+1}\right).
$$
Note: with the above integral comparison, completing the upper bound, you recover that the limit is $\log 2-\frac{1}{2}$ without knowing anything about Euler-Mascheroni constant. |
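A quick numeric check of the limit (a sketch; the values of $N$ are arbitrary):

```python
import math

def S(N):
    # the sum from the question, written for even N
    return sum((N - k) / (N * k) for k in range(N // 2 + 1, N + 1))

for N in (10, 100, 10_000):
    print(N, S(N))
print(math.log(2) - 0.5)   # limiting value, about 0.19315
```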
H: Given an semi-ellipse inscribed about a square, how do I find the equation of the ellipse?
Given the following diagram:
Where:
W = (-1, 0)
X = (-1, 2)
Y = (1, 2)
Z = (1, 0)
How can I find M?
The ellipse can be assumed to be a semi-ellipse with one of the foci on $\bar{XY}$. I'm guessing that this means that one focus is at (0, 2), with the other focus at (0, -2). Now, by the definition of an ellipse, I know that the sum of the distances from those two points to any other point on the ellipse is a constant. But, having reasoned that far, I've hit a dead end. Where do I go from here?
AI: Recall the equation of an ellipse: $\boxed{\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} = 1}$
Furthermore, recall that if $c$ is the distance from the center to a focus, then: $\boxed{c^2=a^2-b^2}$
Since the focus is on $XY$, we know that $c=2$ so that $2^2=a^2-b^2$ or $\boxed{b^2=a^2-4}$.
Since $Y(1,2)$ is on the ellipse, we may substitute it into its equation, which yields:
$$ \begin{align*}
\dfrac{1^2}{a^2} + \dfrac{2^2}{b^2} &= 1 \\
\dfrac{1}{a^2} + \dfrac{4}{a^2-4} &= 1 \\
1(a^2-4)+4(a^2) &=a^2(a^2-4) \\
5a^2-4 &=a^4-4a^2 \\
0 &=a^4-9a^2+4 \\
a^2 &= \dfrac{9 \pm \sqrt{65}}{2} \\
a &= \sqrt{\dfrac{9 \pm \sqrt{65}}{2}} \quad (\approx 0.6847 \text{ or } 2.9208)
\end{align*} $$
However, notice that from the diagram, $M(a,0)$ lies to the right of $Z(1,0)$, implying that $a>1$. Hence, we reject the negative version and conclude that:
$$
\boxed{a = \sqrt{\dfrac{9 + \sqrt{65}}{2}} = 2.9208...}
$$ |
H: Convergence of series and disk or convergence
I applied the Ratio Test and I got
$$x + 2y < 1$$
Shouldn't this give me a half plane? The answer says it is (D). The only reason why I think it could be (D) is because $y$ could be positive or negative?
AI: What you want is $|x+2y|<1$ or $x+2y=-1$: this last option gives the alternating harmonic series. Note that $$ax+by+c<0$$ is the half "plane" determined by a line, and $ax+by+c=0$ is a line.
H: What is the relationship between $(u\times v)\times w$ and $u\times(v\times w)$?
Given three vectors $u$, $v$, and $w$, $(u\times v)\times w\neq u\times(v\times w)$. This has been a stated fact in my recent class. But what is the ultimate relationship between them? I would presume that one is a scalar multiple of the other?
AI: What you see here by the inequality of the left-hand side and the right-hand side is that the cross product (in this case, the vector triple product) is not associative $(\dagger)$:
$$(\vec u\times \vec v)\times \vec w\neq \vec u\times(\vec v\times \vec w)$$
Using boldface to represent the vectors:
$$ \mathbf{u}\times (\mathbf{v}\times \mathbf{w}) = (\mathbf{u}\cdot\mathbf{w})\mathbf{v} - (\mathbf{u}\cdot\mathbf{v})\mathbf{w}\tag{1}$$
$$(\mathbf{u}\times \mathbf{v})\times \mathbf{w} = -\mathbf{w}\times(\mathbf{u}\times \mathbf{v}) = -(\mathbf{w}\cdot\mathbf{v})\mathbf{u} + (\mathbf{w}\cdot\mathbf{u})\mathbf{v}\tag{2}$$
See Wikipedia for more about triple products of vectors
$(\dagger)$ "In mathematics the Jacobi identity is a property that a binary operation [like the triple vector cross product] can satisfy which determines how the order of evaluation behaves for the given operation. Unlike for associative operations, order of evaluation is significant for operations satisfying Jacobi identity. It is named after the German mathematician Carl Gustav Jakob Jacobi." (Wikipedia). |
H: A question regarding exponential distribution
Charlie and Bella and their friends Mark and Leonard each have a toy. Each toy breaks at a time that is exponentially distributed with expectation $24$ hours. Assume the toys are independent of each other.
What is the cumulative distribution function of the time until Charlie's toy fails?
Let $X$ be the time until Charlie's toy fails. $E[X]=\frac{1}{\lambda}=24$. Then $\lambda=\frac{1}{24}$. And so the cumulative distribution function is
$$F(x) = \begin{cases}
1-e^{-x/24}& \text{ if } x \ge 0, \\
0 & \text{ if } x \lt 0.
\end{cases}$$
Let $T$ be the time until all the toys have failed. Compute the cumulative distribution function of $T$.
The toys are independent of each other, so $E[T]=96$. Then $\lambda=\frac{1}{96}$. Hence the cumulative distribution function of $T$ is
$$F(t) = \begin{cases}
1-e^{-t/96}& \text{ if } t \ge 0, \\
0 & \text{ if } t \lt 0.
\end{cases}$$
Let $S$ be the time until the first toy has failed. Compute the cumulative distribution function of $S$.
Are the attempts above correct? How do I do the last one? Thank you.
AI: The first part is solved correctly.
Let $T$ be the time until all the toys have failed. The probability that $T\le t$ is the probability that all the toys have failed by time $t$. By independence this is the fourth power of the cdf you calculated in the first part. In symbols,
$$F_T(t)=\left(1-e^{-t/24}\right)^4.$$
The time $T$ is usually called $\max(X_1, X_2,X_3,X_4)$, where the $X_i$ are the lifetimes of the toys of the various people.
For the last, let $S$ be the time until the first failure. The probability this is $\gt s$ is the probability all the toys are alive at time $s$. This is $(e^{-s/24})^4$. So
$$F_S(s)=1-\left(e^{-s/24}\right)^4.$$
The time $s$ is usually called $\min(X_1, X_2,X_3,X_4)$.
Remark: In your solution of the second problem, you asserted that the mean of $T$ is the sum of the means of the $X_i$. There is no reason to think that. The mean of the sum of the $X_i$ is the sum of the means, but $T$ is not the sum of the $X_i$. You also assumed that $T$ has exponential distribution. It doesn't. However, $S$ does. |
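A small Monte Carlo sketch (the sample size and the test time $t=30$ hours are arbitrary choices) comparing the empirical distributions of the max and min with the formulas above:

```python
import numpy as np

rng = np.random.default_rng(42)
# lifetimes of the four toys, exponential with mean 24 hours
X = rng.exponential(scale=24.0, size=(1_000_000, 4))

t = 30.0   # an arbitrary test time in hours
print((X.max(axis=1) <= t).mean(), (1 - np.exp(-t / 24)) ** 4)   # empirical vs F_T(t)
print((X.min(axis=1) <= t).mean(), 1 - np.exp(-4 * t / 24))      # empirical vs F_S(t)
```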
H: Prove that $V$ is the direct sum of $W_1, W_2 ,\dots , W_k$ if and only if $\dim(V) = \sum_{i=1}^k \dim W_i$
Let $W_1,\dots,W_k$ be a subspace of a finite dimensional vector space $V$ such that the $\sum_{i=1}^k W_i = V$.
Prove: $V$ is the direct sum of $W_1, W_2 , \dots, W_k$ if and only $\dim(V) = \sum_{i=1}^k \dim W_i$.
AI: Define $\ \ B_i$ a basis of $ \ \ W_i\ \ $ for $\ \ i \in {1,..,k } $. Since sum( $W_i$)$= V$ we know that $\bigcup_{i=1}^k B_i$ is a spanning list of $V$.
Now we add the condition : sum dim($B_i$)$=$sum dim($W_i$)$=$dim($V$).
This means that $\bigcup_{i=1}^k B_i$ is a basis. (Spanning list of $V$ with same dimension).
Can you conclude from here the direct sum property?
If we define Basises as before and we know that $W_i$ is a direct sum can you conclude ( knowing that sum( $W_i$)$= V$ , wich is given) that $\ \ \bigcup_{i=1}^k B_i$ is a Basis of $V$? |
H: $\mathbb{Z}:= \mathbb{N} \cup (-\mathbb{N}) \cup \{0\}$, $a,b \in \mathbb{Z}$, when $a \le b$?
Let $\mathbb{Z}:= \mathbb{N} \cup (-\mathbb{N}) \cup \{0\}$, and $a,b \in \mathbb{Z}$, when $a \le b$?
Thanks in advance!!
AI: $$a\le b$$ if and only if $$b-a\in \mathbb{N}\cup \{0\}$$ |
H: Working backwards with Determinants.
The determinant of A is -2. Find $\det(3A^TA^3)$. You may leave your answer as a product of integers.
What I did was this:
$$\det(3A^TA^3)$$
$$\det(3A^T)\det(A^3)$$
$$3\det(A^T)(\det(A))^3$$
This is the part where I became confused;
$$3\det(A^T)(-2)^3 $$
That's as far as I got, and looking at the solution what I was doing was correct, however the solution for it does this;
$$\det(3A^T)\det(A)^{3}$$
$$(3^3)\det(A)\det(A)^{3}$$
$$(27)(-2)^4$$
$$(27)(16)$$
$$432$$
Now the part that confuses me about this question is , where does the other power of three come from in the line, and what happened to the transpose?
$$(3^3)\det(A)\det(A)^{3}$$
AI: Hint: $\det(A) = \det(A^{T})$ (can you see why this is?); furthermore, for an $n \times n$ matrix, $\det(kA) = k^{n}\det(A)$, where $k$ is a constant. |
H: Elementary Row Matrices
Let $A$ =
$$
\begin{align}
\begin{bmatrix}
-4 & 3\\
1 & 0
\end{bmatrix}
\end{align}
$$
Find $2 \times 2$ elementary matrices $E_1$,$E_2$,$E_3$ such that $A$ = $E_1 E_2 E_3$
I figured out the operations which need to be performed which are;
$E_1$ = $R_2 \leftrightarrow R_1$
$E_2$ = $R_2$ = $R_2$ + $4R_1$
$E_3$ = $R_2$ * $\frac{1}{3}$
My question is how would I go about writing the elementary matrices? The solution says that they are;
$E_1$ =
$
\begin{align}
\begin{bmatrix}
1 & -4\\
0 & 1
\end{bmatrix}
\end{align}
$
$E_2$ =
$
\begin{align}
\begin{bmatrix}
3 & 0\\
0 & 1
\end{bmatrix}
\end{align}
$
$E_3$ =
$
\begin{align}
\begin{bmatrix}
0 & 1\\
1 & 0
\end{bmatrix}
\end{align}
$
AI: Hint: what do elementary matrices correspond to? Can you some how form a correspondence between the row operations you used to reduce the matrix and elementary matrices? In other words, the elementary matrices are related to how $R_{1}$ and $R_{2}$ are manipulated in each row reduction step. |
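As a quick numerical check of the matrices quoted in the question (a sketch using numpy), their product does reproduce $A$:

```python
import numpy as np

A  = np.array([[-4, 3], [1, 0]])
E1 = np.array([[1, -4], [0, 1]])
E2 = np.array([[3, 0], [0, 1]])
E3 = np.array([[0, 1], [1, 0]])
print(E1 @ E2 @ E3)    # [[-4  3] [ 1  0]], i.e. the product reproduces A
```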
H: Cramer's Rule, 2x2 Matrix
Solve the following system using Cramer's Rule.
$$2x + y = 1$$
$$x - 4y = 14$$
I haven't done Cramer's rule for 2x2 matrices, but I figured that the same rules applied as in a 3x3, here's what I did;
$$\det(A) = -8-1 = -9$$
$D_x$ =
$
\begin{align}
\begin{bmatrix}
4 & 1\\
14 & -4
\end{bmatrix}
\end{align}
$
$
= -16-14 = -30
\quad \quad\quad D_y =
\begin{align}
\begin{bmatrix}
2 & 4\\
1 & 14
\end{bmatrix}
\end{align}
= 28 - 4 = 24
$
$$x = \frac{-30}{-9} = \frac{10}{3}; \quad\quad y = \frac{24}{3} = 8$$
Now the solution comes up with;
$Dx$ =
$
\begin{align}
\begin{bmatrix}
1 & 1\\
14 & -4
\end{bmatrix}
\end{align}
$
$Dy$ =
$
\begin{align}
\begin{bmatrix}
2 & 1\\
1 & 14
\end{bmatrix}
\end{align}
$
Which then results in a completely different answer, it's more likely that I made a mistake, could anyone point out what I did wrong?
AI: I think your thought process is just fine: however, I suspect that you may have transposed $1$ and $4$ in the respective columns for $D_x$ and $D_y$, which would account for the "discrepancy".
Did you mis-transcribe the value $4$ (right-hand side of equation 1), having meant to write $1$?
And if so, perform your calculations using Cramer's rule, using the same process, and I think you'll be "good to go!" |
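For reference, here is a quick check of the corrected setup (a sketch using numpy), where the constants column $(1,14)$ replaces one column of the coefficient matrix at a time:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, -4.0]])
b = np.array([1.0, 14.0])                              # constants column (1, 14)

D  = np.linalg.det(A)                                  # -9
Dx = np.linalg.det(np.column_stack([b, A[:, 1]]))      # b replaces the x-column
Dy = np.linalg.det(np.column_stack([A[:, 0], b]))      # b replaces the y-column
print(Dx / D, Dy / D)                                  # x = 2, y = -3
print(np.linalg.solve(A, b))                           # same solution
```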
H: How can I determine $\lim_{x\rightarrow 2} \frac{(x^3-5x^2+8x-4)}{x^4-5x-6}$?
This is the limit:
$$\lim_{x\to2}\frac{x^3-5x^2+8x-4}{x^4-5x-6}$$
Thank you.
AI: As I'm guessing you found out, the numerator and denominator both evaluate to $0$ at $x = 2$ $(\dagger)$. But the limit may nonetheless exist as $x$ approaches $2$. so whenever such a situation arises, that indicates more work needs to be done.
In this case, we can apply l'Hospital once, and evaluate, or
we can note that we have polynomials in each of the numerator and denominator, and it is quite likely that each can be factored, and perhaps, a common factor can be found: Recall the Rational Root Theorem.
Indeed, we can factor and will find both the numerator and denominator share a common, and in fact, it's exactly the a factor that evaluates to $0$ when evaluated at $x = 2:\;$ $(x - 2)$.
$$\lim_{x\to2}\frac{x^3-5x^2+8x-4}{x^4-5x-6} = \lim_{x\to 2} \frac{(x - 2)(x^2 - 3x + 2)}{(x - 2)(x^3 + 2x^2 + 4x + 3)} $$
Now we can eliminate the factor (cancel it from the numerator and denominator):
$$\lim_{x\to 2} \frac{(x - 2)(x^2 - 3x + 2)}{(x - 2)(x^3 + 2x^2 + 4x + 3)} = \lim_{x \to 2} \frac{x^2 - 3x + 2}{x^3 + 2x^2 + 4x + 3}$$
Can you take it from here?
$(\dagger)$ [If it were only the denominator that evaluated to zero, but the numerator evaluated to some non-zero real value, then the limit would approach either $+\infty$ or $-\infty$ or fail to exist altogether. In this case however, we have what is called an "indeterminate" form: meaning more work needs to be done before "evaluating" the limit.] |
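For reference, a computer algebra system confirms the value of the limit (a sketch using sympy):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.limit((x**3 - 5*x**2 + 8*x - 4) / (x**4 - 5*x - 6), x, 2))   # prints the limit's value
```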
H: Is there a divergent sequence such that $(x_{k+1}-x_k)\rightarrow 0$?
Is there a divergent sequence such that $\lim_{k\rightarrow\infty}(x_{k+1}-x_k)=0$?
AI: Let $f:(0,\infty)\to\mathbb R$ be any function such that $\lim\limits_{x\to\infty}f(x)=\infty$ and $\lim\limits_{x\to\infty}f'(x)=0$. Then $(f(1),f(2),f(3),\ldots)$ is an example. In particular, you could take $f(x)=\log(x)$ or $f(x)=\sqrt x$. |
H: Question about circles.
Find the value of x. If necessary, round your answer to the nearest tenth. The figure is not drawn to scale.
AB = 19, BC = 10, and CD = 5
A)23
B)53
C)38
D)58
What theorem should I use? And how do I use that theorem?
AI: AC = AB + BC = 19 + 10 = 29
if AC*BC=CD*(CD+X), then 29*10 = 5*(5+x)
290 + 25 + 5x
5x = 265
x = 53 |
H: Continuous Function and Open Subsets in $\mathbb R$
Let $E$ be a subset in $\mathbb R$, $f$ a real-value function on $E$.
Prove that $f$ is continuous on $E\iff$ for every open subset $V$ of $\mathbb R$, $f^{-1}(V)$ in open relative to $E$.
My question is about the ($\Rightarrow$) direction only.
Let $f$ be a continuous function on $E$ and $V$ a open subset on $\mathbb R$.
If $f^{-1}(V)=\{\}$, then it is open. Suppose that $f^{-1}(V)\neq\{\}$. Let $p\in f^{-1}(V)$.
Then $f(p)\in V$. Select $\epsilon$ such that $N_\epsilon(f(p))\subset V$.
My question is this. At this point, we do not know if $p$ is an element of $E$.
If $p\in E$, since $f$ is continuous on $E$, $\exists\delta$ such that $f(x)\in N_\epsilon(f(p))$ for all $x\in N_\delta(p)\cap E$.
Thus $N_\delta(p)\cap E\subset f^{-1}(V)$.
But, suppose that $p\notin E$. How do I know that the above statement is still true?
I tried the following:
Let $q\in E$ be a point such that $f(q)\in N_\epsilon(f(p))$
Select $\alpha$ such that $N_\alpha(f(q))\in N_\epsilon(f(p))$.
Then $\exists\delta$ such that $f(x)\in N_\alpha(f(q))$ for all $x\in N_\delta(q)\cap E$.
But this only shows that $N_\delta(q)\cap E\subset f^{-1}(V)$, not $N_\delta(p) ....$
I also thought about showing that if $p\notin E$, then $N_\delta(p)\cap E=\{\}$,
but I have no idea about how to do it.
AI: You must change only one step in your proof:
When you say:
"If $f^{-1}(V)=\{\}$ then it is open"
replace it by
"If $f^{-1}(V)\cap E=\{\}$" then $f^{-1}(V)$ is open relative to $E$".
Then the following line must be
"suppose now $f^{-1}(V)\cap E\neq\{\}$ then exists $p\in f^{-1}(V)\cap E$"
and your problem was solved since $p\in E$. |
H: $\sum_{j=1}^{n} f\left(\frac{j}{n}\right) \cdot \chi _{\left[\frac{j-1}{n},\frac{j}{n}\right)} \longrightarrow f$ converges uniformly
Let $f:[0,1]\longrightarrow \mathbb{R}$ be a continuous function
and $\displaystyle f_n=\sum_{j=1}^{n} f\left(\frac{j}{n}\right) \cdot \chi _{\left[\frac{j-1}{n},\frac{j}{n}\right)}$
where $\chi _{\left[\frac{j-1}{n},\frac{j}{n}\right)}$ is the characteristic function of ${\left[\frac{j-1}{n},\frac{j}{n}\right)}$
How can we prove that $f_n \longrightarrow f $ uniformly ?
Any hints would be appreciated.
AI: Sure. $f$ is uniformly continuous on $[0,1]$, so, given $\epsilon>0$, there is
$\delta>0$ so that whenever $|x-x'|<\delta$, we have $|f(x)-f(x')|<\epsilon$. Just pick $N$ so that $1/N<\delta$. Then it should follow that whenever $n\ge N$, we have $|f_n(x)-f(x)|<\epsilon$ for every $x\in [0,1]$. The point is that $f_n(x)=f(j/n)$ where $x\in [\frac{j-1}n,\frac jn)$, and $|x-j/n|<\delta$. |
H: $f$ can be extended iff $\partial f = 0$
If
$0\rightarrow{A'}\rightarrow{A}\rightarrow{A''}\rightarrow{0}$
is an exact sequence of modules, then there exists an exact secuence
$0\rightarrow{}Hom(A'',B)\rightarrow{}Hom(A,B)\rightarrow{}Hom(A',B)\xrightarrow \partial{Ext}^1(A'',B)\rightarrow ...$
Suppose $A'\subseteq A$ and $f:A'\to B$. Prove that $f$ can be extended to $A$ if and only if $\partial f = 0$.
Any hint? Thanks!
AI: Let $i: A' \to A$ be the inclusion. If $f$ can be extended, then it's in the image of what map? What do you know about the composition of two consecutive maps in a complex?
Conversely, say $\partial f$=0. Then $f$ is in the kernel of $\partial$, but that sequence is exact, so... |
H: Span of an empty list
It's a simple question, but I still thought of asking it because I don't see the logic behind it. Recently I read the statement "the span of the empty list ( ) equals {0}". An empty list has no elements in it, so how can we even talk about its span, since the span is the set of linear combinations of the vectors in a list, and an empty list contains no vectors at all? So, is it just a declaration that we make? I guess there is no logic behind it and we just assume it like that. But why? What's the reason for doing so? And why do we need to define the span of an empty list anyway? Sorry, I am not getting the logic behind it and that's why I thought of asking you guys. Thanks.
AI: You can just assume it (or more correctly define it) that way.
That being said the span of a set $S \subset V$ in a vector space $V$ is just the smallest subspace of $V$ containing $S$, as such $\{0\}$ is the span of $\emptyset$. (Note that any subspace must contain $0$ by definition).
In terms of linear combinations you could think of the span of a set $S \subset V$ as the set $\{\lambda_1v_1 + ...\lambda_nv_n + 0 : n \in \mathbb N , v_1,...,v_n \in S, \lambda_1,...,\lambda_n \in \mathbb R\}$ (assuming $\mathbb R$ is the underlying field). This definition accommodates the possibility that the set $S$ is empty, as well as the possibility that the set $S$ is infinite. |
H: What is "exclusive neighborhoods"?
I am trying to read a proof but don't understand what does this mean. Thank you very much!
So here is the question I am trying to prove: "Show that for a Lefschetz map f on a compact manifold X, there can be only finitely many fixed points." And the proof is: "The graph of f is transversal to the diagonal inside X×X." Then I get lost here: "Now we take the preimage of the diagonal under the graph map from X to X×X. This is a 0-manifold inside X, and being a manifold can’t have limit points."
And "exclusive neighborhoods" appears in the followed proof:
Then (a) we can put an open set around each one containing no others, (b) the union of them is a closed set. Take the following list of open sets: the exclusive neighborhoods, plus the complement of the set of fixed points. This is an open cover of X, so by compactness it has a finite subcover, so there must have been finitely many of them. QED.
AI: I would guess two neighbourhoods $M$ and $N$ with $M\cap N=\emptyset$. |
H: Riemann zeta function at zero
Can the value of Riemann zeta function at 0, $\zeta(0)=-1/2$, be deduced from the identity $E(z)=E(1-z)$, where
$$E(z)=\pi^{-z/2}\Gamma(z/2)\zeta(z)?$$
AI: The functional equation under consideration yields:
$$
\zeta(s)=\frac{\pi^s}{\sqrt{\pi}}\frac{\Gamma\left(\frac{1-s}{2}\right)}{\Gamma\left(\frac{s}{2}\right)}\zeta(1-s).
$$
So the fact that $\zeta(0)=\lim_0\zeta(s)=-\frac{1}{2}$ follows from the following three ingredients:
$$
\zeta(s)\sim_1\frac{1}{s-1}\qquad \Gamma(s)\sim_0\frac{1}{s}\qquad \sqrt{\pi}=\Gamma\left(\frac{1}{2}\right)=\lim_{1/2} \,\Gamma(s).
$$
I assume you know the first two ones. The last one can be computed directly from the integral definition of $\Gamma$ by variable change $u=\sqrt{t}$, using $\int_0^{+\infty}e^{-u^2}du=\frac{\sqrt{\pi}}{2}$.
The so-called functional equation of Riemann zeta function is:
$$
\zeta(s)=2^s\pi^{s-1}\sin\left(\frac{\pi s}{2}\right)\Gamma(1-s)\zeta(1-s).
$$
Using
$$
\zeta(s)\sim_1\frac{1}{s-1}\qquad\sin\left(\frac{\pi s}{2}\right)\sim_0\frac{\pi s}{2}\qquad1=\Gamma(1)=\lim_1\,\Gamma(s)
$$
it follows that
$$
\zeta(s)\sim_02^0\pi^{-1}\frac{\pi s}{2}\Gamma(1)\left(\frac{-1}{s}\right)=-\frac{1}{2}.
$$ |
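Numerically, one can confirm this value directly (a sketch using mpmath):

```python
from mpmath import mp, zeta

mp.dps = 30
print(zeta(0))   # -0.5, matching the limit computed above
```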
H: If $X_n\to X$ a.e. it does not follow that $\mu_n(P)\to \mu(P)$
If $X_n\to X$ a.e. and $\mu_n$ and $\mu$ are the p.m.'s of $X_n$ and $X$, it does not follow that $\mu_n(P)\to \mu(P)$ even for all intervals $P$.
I am having trouble coming up with an example that illustrates this. Why does Egoroff's theorem not guarantee that $\mu_n(P)\to \mu(P)$?
AI: Let's look at the measures we have in this problem.
We are working on $(\Omega, \mathcal{F},\mathbb{P})$ so when we say $X_n \to X$ a.s., we mean $\mathbb{P}$ almost surely. Then we have the laws $\mu_n$ of $X_n$ and $\mu$ of $X$.
Egorov's Theorem tells us we may find a set of $\mathbb{P}$ measure at least $1-\epsilon$ on which $X_n \to X$ uniformly. For sake of argument, suppose we could find a set of $\mathbb{P}$ measure $1$ on which $X_n \to X$ uniformly. Great, now what? The point is that the measure $\mathbb{P}$ isn't really related to $\mu_n = \mathbb{P}\circ X_n^{-1}$ since $X_n^{-1}$ can do whatever it pleases with sets before $\mathbb{P}$ sees them.
Here is a trivial example which demonstrates the conclusion may fail for an interval: If $X_n = 1/n$ for all $n$ with probability $1$, and $P=(0,1]$ then $\mu_n(P)=1 \not\to 0 = \mu(P)$. |
H: Proving that ${n}\choose{k}$ $=$ ${n}\choose{n-k}$
I'm reading Lang's Undergraduate Analysis:
Let ${n}\choose{k}$ denote the binomial coefficient,
$${n\choose k}=\frac{n!}{k!(n-k)!}$$
where $n,k$ are integers $\geq0,0\leq k\leq n$, and $0!$ is defined to be $1$. Prove the following assertion:
$${n\choose k}={n\choose n-k}$$
I proceded by making the adequate substitutions:
$${n\choose n-k}=\frac{n!}{\color{red}{(n-k)}!(n-\color{red}{(n-k)})!}$$
And then I simplified and achieved:
$$\frac{n!}{(n-k)!-k!}$$
But I'm not sure on how to proceed from here, I've noticed that this result is very similar to $\frac{n!}{k!(n-k)!}$. What should I do? I guess it has something to do with the statement about the nature of $n$ and $k$:
$n,k$ are integers $\geq0,0\leq k\leq n$
So should I just change the minus sign to plus sign and think of it as a product of $(n-k)!$?
$$\frac{n!}{(n-k)!-k!}\Rightarrow\frac{n!}{(n-k)!+k!}\Rightarrow \frac{n!}{k!(n-k)!}$$
I'm in doubt because I've obtained the third result on Mathematica, but obtained the first with paper and pencil. I'm not sure if there are different rules for simplification with factorials. I'm not sure if this $(n-k)!+k!$ mean a sum or a product in this case.
AI: You simply failed to distribute the negative sign (i.e., to multiply the difference by $-1$):
$$\begin{align} n - (n - k) &= n + (-1)(n - k) \\ &= n + (-1)\cdot n - (-1)\cdot k \\ &= n - n + k \\ &= k\end{align}$$
(That is, negation distributes over sums and differences.)
$${n\choose n-k}=\frac{n!}{\color{red}{(n-k)}!(n-\color{red}{(n-k)})!} =\frac{n!}{(n-k)!(n - n + k))!} =\frac{n!}{(n-k)!(k)!}= \frac{n!}{k!(n-k)!}$$ |
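Not part of the proof, but a quick numerical confirmation of the identity is easy (this sketch assumes Python 3.8+ for `math.comb`).

```python
from math import comb

n = 10
for k in range(n + 1):
    assert comb(n, k) == comb(n, n - k)        # the symmetry identity
print([comb(n, k) for k in range(n + 1)])      # each row of Pascal's triangle is a palindrome
```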
H: A tricky logarithms problem?
$ \log_{4n} 40 \sqrt{3} \ = \ \log_{3n} 45$. Find $n^3$.
Any hints? Thanks!
AI: By elementary arithmetic operations (the note after each / describes the next step):
$$\log_{4n}40\sqrt{3}=\log_{3n}45\ \ \ \mbox{ / definition of logarithm}$$
$$(4n)^{\log_{3n}45}=40\sqrt{3}\ \ \ \mbox { / } 4=\frac{4}{3}\cdot 3$$
$$\left({4\over 3}\cdot 3n\right)^{\log_{3n}45}=40\sqrt{3}\ \ \ \mbox { / }(ab)^c=a^cb^c$$
$$\left({4\over 3}\right)^{\log_{3n}45}\cdot (3n)^{\log_{3n}45}=40\sqrt{3}\ \ \ \mbox { / } a^{\log_ab}=b$$
$$\left({4\over 3}\right)^{\log_{3n}45}\cdot 45=40\sqrt{3}\ \ \ \mbox { / }\cdot\frac{1}{45}$$
$$\left({4\over 3}\right)^{\log_{3n}45}={8\over 9}\sqrt{3}\ \ \ \mbox { / }\left({a\over b}\right)^c=\frac{a^c}{b^c}$$
$$\left({4\over 3}\right)^{\log_{3n}45}=\left({4\over 3}\right)^{3\over 2}\ \ \ \mbox { / }a^b=a^c\Rightarrow b=c\ \ (a\neq 0,1)$$
$$\log_{3n}45=\frac{3}{2}\ \ \ \mbox { / definition of logarithm}$$
$$(3n)^{3\over 2}=45\ \ \ \mbox { / powered by } \frac{2}{3}\mbox{ and divided by }3$$
$$n=\frac{\sqrt[3]{45^2}}{3}\ \ \ \mbox { / powered by 3}$$
$$n^3=\frac{45^2}{27}$$
$$n^3=75$$ |
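A quick numerical check of the result (an illustrative sketch using only the Python standard library; not part of the derivation):

```python
from math import log, sqrt

n = 75 ** (1 / 3)               # since n^3 = 75
lhs = log(40 * sqrt(3), 4 * n)  # log base 4n of 40*sqrt(3)
rhs = log(45, 3 * n)            # log base 3n of 45
print(lhs, rhs)                 # both equal 1.5 up to floating-point rounding
```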
H: LU decomposition steps
I've been looking at some LU decomposition problems, and I understand that a matrix $A$ is reduced to the form $A=LU$, where $L$ is a lower triangular matrix and $U$ is an upper triangular matrix; however, I am having trouble understanding the steps to get to these matrices. Could someone please explain the method for LU decomposition in detail, preferably excluding the concept of permutation matrices? (We haven't talked about permutation matrices in class yet, so our professor forbids us to use them for the decomposition.)
AI: $LU$ decomposition is really just another way to say Gaussian elimination.
If you're familiar with that, putting the pieces together is easy.
Here is an example. Let
$$
A=A^{\left(0\right)}=\left[\begin{array}{ccc}
8 & 1 & 6\\
4 & 9 & 2\\
0 & 5 & 7
\end{array}\right].
$$
Proceed by Gaussian elimination. The first multiplier is $\ell_{2,1}=4/8=0.5$
(this is the multiplier that allows us to cancel $a_{2,1}=4$ using
the first row) and the second is $\ell_{3,1}=0/8=0$.
We arrive at
$$
A^{\left(1\right)}=\left[\begin{array}{ccc}
8 & 1 & 6\\
0 & 8.5 & -1\\
0 & 5 & 7
\end{array}\right].
$$
To cancel out $a_{3,2}^{\left(1\right)}=5$, we use the multiplier
$\ell_{3,2}=5/8.5\approx0.5882$ to yield
$$
A^{\left(2\right)}\approx\left[\begin{array}{ccc}
8 & 1 & 6\\
0 & 8.5 & -1\\
0 & 0 & 7.5882
\end{array}\right]
$$
which yields the $LU$ decomposition
$$
A=LU\approx\left[\begin{array}{ccc}
1 & 0 & 0\\
0.5 & 1 & 0\\
0 & 0.5882 & 1
\end{array}\right]\left[\begin{array}{ccc}
8 & 1 & 6\\
0 & 8.5 & -1\\
0 & 0 & 7.5882
\end{array}\right].
$$
Note that $L$ is just made up of the multipliers we used in Gaussian
elimination with $1$s on the diagonal, while $U$ is just $A^{\left(2\right)}$. |
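For readers who want to see the procedure mechanically, here is a minimal sketch of Doolittle-style LU decomposition without pivoting (so no permutation matrices), assuming NumPy is available; it reproduces the factors above on the example matrix. It requires every pivot to be nonzero, which holds here.

```python
import numpy as np

def lu_no_pivot(A):
    """Return L (unit lower triangular) and U (upper triangular) with A = L @ U."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.eye(n)
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]   # the Gaussian-elimination multiplier l_{i,j}
            U[i, :] -= L[i, j] * U[j, :]  # zero out entry (i, j)
    return L, U

A = [[8, 1, 6], [4, 9, 2], [0, 5, 7]]
L, U = lu_no_pivot(A)
print(L)                      # multipliers 0.5, 0, 0.5882... below the unit diagonal
print(U)                      # matches A^(2) above
print(np.allclose(L @ U, A))  # True
```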
H: When does $e^{f(x)}$ have an antiderivative?
Today I tried to integrate $x^x$ by applying a "reverse chain rule", which turned out to be invalid. I was told $\int e^{f(x)}\,dx$ can be done when $f(x)$ is linear. This made me wonder what conditions on $f$ guarantee that $\int e^{f(x)}\,dx$ can be expressed in terms of elementary functions, but I'm not sure where to start.
AI: When $f$ is a polynomial, it is a consequence of a celebrated theorem of Liouville that $e^{f(x)}$ has an elementary antiderivative iff there is a polynomial $h$ such that $1=h'+hf'$. This forces $f$ to have degree at most $1$. In particular, the function $e^{x^2}$ does not have an elementary antiderivative.
See How can you prove that a function has no closed form integral?. |
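One can also see the dichotomy concretely with a computer algebra system; the sketch below assumes SymPy is available and is only an illustration, not a proof.

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.exp(2 * x + 1), x))  # f linear: elementary antiderivative exp(2*x + 1)/2
print(sp.integrate(sp.exp(x**2), x))       # f of degree 2: sqrt(pi)*erfi(x)/2, where erfi is not elementary
```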