H: Lottery Competition
Suppose I want to hold a lottery competition. I just want two people to win the lottery and there are 100 people buying my lottery. What should the maximum probability (of each player winning the lottery) be to ensure that exactly two people win the lottery?
I obviously think it's $2/100=1/50$. But I think it should not be so trivial. I would appreciate any comments.
AI: You have a binomial distribution here. If the probability that each player wins is $x$, the probability of exactly $2$ winning is ${100 \choose 2}x^2(1-x)^{98}$. The first factor is the number of ways to choose the winners, the $x^2$ is the probability they both win, and the $(1-x)^{98}$ is the probability that all the others lose. Now take the derivative and set it to zero. Yes, you should get $1/50$.
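Spelling the last step out (an addendum, not part of the original answer): since ${100 \choose 2}$ is constant, it suffices to maximize $x^2(1-x)^{98}$ on $(0,1)$: $$\frac{d}{dx}\,x^2(1-x)^{98}=x(1-x)^{97}\bigl(2(1-x)-98x\bigr)=0\implies 2-100x=0\implies x=\frac{1}{50}.$$ |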
H: On the proof of Schur's lemma in Fulton & Harris
I'm reading the book on representation theory by Fulton and Harris. I'm stuck with the proof of Schur's Lemma 1.7:
Schur's lemma 1.7 If $V$ and $W$ are irreducible representations of $G$ and $\phi:V\rightarrow W$ is a $G$-module homomorphism, then
1. Either $\phi$ is an isomorphism, or $\phi = 0$.
2. [...]
Proof. The first proof follows from the fact that the kernel of $\phi$ and the image of $ \phi$ are invariant subspaces. [...]
I understand that a $G$-homomorphism is a synonym for a map between two representations. I figured out why the kernel and the image are invariant (under any $g\in G$), but I have no idea why that fact leads to the conclusion.
I would be most grateful if you could fill in the gap.
AI: Since the kernel is an invariant subspace and $V$ is a simple representation, the kernel is $0$ or $V$.
Since the image of $\phi$ is an invariant subspace of $W$, it is either $0$ or $W$.
If $\phi$ is nonzero, then $\ker(\phi)=0$ and $Im(\phi)=W$.
If $\phi$ is zero, then $\ker(\phi)=V$ and $Im(\phi)=0$. |
H: On the definition of divisors in Riemann Surfaces
The sum notation for a Divisor $D$ in a Riemann Surface $X$ (as in Miranda's "Algebraic Curves and Riemann Surfaces") is
$$
D=\sum_{p\in X} D(p)\cdot p
$$
That is, $D$ assumes the value $D(p)$ at $p$. For example, a principal divisor of $f$ is the divisor
$$
div(f)=D=\sum_{p\in X} ord_p(f)\cdot p
$$
which, if I understood it correctly, means that this divisor is the function
$$
\begin{array}{rccc}
div(f):&X&\rightarrow&\mathbb{Z}\\
&p&\mapsto &ord_p(f)
\end{array}
$$
Now, defining the divisor of zeroes and the divisor of poles as
$$
div_0(f)=\sum_{p\text{ with }ord_p(f)>0} ord_p(f)\cdot p
$$
and
$$
div_{\infty}(f)=\sum_{p\text{ with }ord_p(f)<0} (-ord_p(f))\cdot p
$$
respectively, how are those functions defined on the points $x\in X$ such that $ord_x(f)\leq 0$ (in the first case) or $ord_x(f)\geq 0$ (in the second)? The question arises since the divisor must be defined on the whole Riemann Surface $X$.
Thanks in advance.
AI: For points $p$ that don't appear in the summation defining the divisor of zeros (resp. the divisor of poles) the value of the divisor is zero. This ensures that $\mathrm{div}(f)=\mathrm{div}_0(f)-\mathrm{div}_\infty(f)$.
In general, when working with a free abelian group on some set $X$ (the Riemann surface in your example), if one writes down an expression which sums over a subset of $X$, it is to be understood that the coefficients corresponding to the other elements of $X$ are all zero. |
H: Good source for Triebel-Lizorkin spaces?
I'm trying to look into different types of function spaces. At the moment, at least for function spaces involving integration, I only have $L^p$ and $W^{k,p}$. The next function spaces I thought I'd read about are Besov spaces and Triebel-Lizorkin spaces. Admittedly I usually look at wikipedia and Wolfram. However, wikipedia has next to nothing on Triebel-Lizorkin spaces, Wolfram has absolutely nothing on Triebel-Lizorkin and very little on Besov spaces.
Does anyone know of a good source for Triebel-Lizorkin spaces? And maybe Besov spaces too. Google just gives me these crazy articles which are way above my head.
AI: There are the books by Triebel: Theory of Function Spaces, II, III. |
H: For two disjoint compact subsets $A$ and $B$ of a metric space $(X,d)$ show that $d(A,B)>0.$
I was thinking about the following problem: For two disjoint compact subsets $A$ and $B$ of a metric space $(X,d)$ show that $d(A,B)>0.$ I'm having doubts about my attempt. Please have a look and do comment:
Let $d(A,B)=\inf\{d(a,b):a\in A,b\in B\}=0.$ Then $\exists$ sequences $\{a_n\}\subset A,\{b_n\}\subset B$ such that $0\le d(a_n,b_n)<\dfrac{1}{n}~\forall~n.$ Since $A$ is compact, $\exists$ a convergent subsequence $\{a_{r_n}\}$ of $\{a_n\}$ converging to some $a\in A.$ Then $0\le d(a_{r_n},b_{r_n})<\dfrac{1}{r_n}~\forall~n.$ Similarly since $B$ is compact, $\exists$ a convergent subsequence $\{b_{r_{n_m}}\}$ of $\{b_{r_n}\}$ converging to some $b\in B.$ Then $0\le d(a_{r_{n_m}},b_{r_{n_m}})<\dfrac{1}{r_{n_m}}~\forall~m.$ We note that $r_{n_m}\ge n_m\ge m>0\implies 0<\dfrac{1}{r_{n_m}}<\dfrac{1}{m}\to0\implies\dfrac{1}{r_{n_m}}\to0.$ Using the squeezing lemma once again we can see that $\exists$ convergent sequences $\{a_n\}\subset A$ and $\{b_n\}\subset B$ such that $a_n\to a\in A,b_n\to b\in B.$ Then $d(a_n,b_n)\to d(a,b).$ Consequently, $d(a,b)=0\implies a=b,$ a contradiction to $A\cap B=\emptyset.$
AI: Your solution is correct. Here is another one.
Assume $d(A,B)=0$. Consider the function $d(x, B)=\inf\{d(x,y):y\in B\}$. It is continuous, and defined on the compact metric space $A$. So it attains its minimum, i.e. there exists $a\in A$ such that $d(a,B)=\inf\{d(x,B):x\in A\}=d(A,B)=0$. Since $d(a,B)=0$, we have $a\in \operatorname{cl}_X(B)$. Since $B$ is compact it is closed, so $a\in \operatorname{cl}_X(B)=B$. Contradiction, since $A\cap B=\varnothing$. |
H: Non trivial Automorphism
Prove that every finite group having more than two elements has a nontrivial Automorphism.
It is from Topics in Algebra by Herstein. I am not able to solve it.
AI: Suppose that $G$ is abelian with exponent greater than 2 (i.e. not every element has order at most 2). Then mapping $g\in G$ to $g^{-1}$ is an automorphism. It's clearly bijective, and since $G$ is abelian it is indeed a homomorphism; it is nontrivial because some element has order greater than $2$.
If $G$ is abelian with exponent 2, then it is a vector space over $\mathbb Z/2$. Choose a basis for this vector space, and pick your favourite two basis vectors to interchange. This is a nontrivial automorphism.
If $G$ is not abelian, then the center of $G$, $Z(G)\neq G$. Now, let $G$ act on itself via conjugation. These are all automorphisms. Convince yourself that if $Z(G)\neq G$, there is a nontrivial one. |
H: Density of the image and closedness of the inverse of a bounded linear operator
Let $A \colon X \to X$ be a bounded linear operator, where $X$ is a Banach space.
$(Q1)$ Is it true that if $A$ is injective then the image of $A$ is dense in $X$?
$(Q2)$ Is it true that $A^{-1} \colon \operatorname{rg}A \to X$ is closed?
My attempts of solution:
$(A1)$ I have no idea.
$(A2)$ I know that if $A$ is bounded then it is closed, i.e. if $$x_n \to x,\ \text{and}\ Ax_n \to y,\ \Longrightarrow x \in D(A)( = X)\ \text{and}\ Ax = y.$$
Let $y_n \to y$ and $A^{-1}y_n \to w$. $(*)$
By definition, there exist $\{x_n\}$ such that $Ax_n = y_n$ substituting this into $(*)$ we get $$Ax_n \to y,\ \text{and}\ A^{-1}Ax_n = x_n \to w.$$
By closedness of $A$ this implies that $w \in X$ (obviously) and that $Aw = y$, i.e. $w = A^{-1}y$.
Do you have any idea for $(Q1)$? is $(A2)$ correct?
Thank you very much for your time and help!!!
AI: (A2) is correct. Note that it assumes that $A$ is injective, and that for this part it suffices for $X$ to be a normed vector space.
For (Q1), a natural family of counterexamples is given by non surjective isometries on a Banach space, since their ranges are closed proper subspaces. For instance, the unilateral shift operator $S:\ell^2(\mathbb{N})\longrightarrow \ell^2(\mathbb{N})$
$$
S:(x_0,x_1,x_2,\ldots)\longmapsto (0,x_0,x_1,x_2,\ldots).
$$ |
H: If finer topology is second-countable then coarser is second-countable
Let $(X,\tau_1)$ and $(X,\tau_2)$ be topological spaces and $\tau_2$ finer than $\tau_1$. Prove if $\tau_2$ is second-countable then $\tau_1$ is also second-countable.
My try:
Let $B=\{B_n,n\in \mathbb{N}\}$ be a countable basis for $\tau_2$. Let's prove that $B_1=B\cap \tau_1$ is a countable basis for $\tau_1$. It's countable because it's a subset of a countable set.
And now I'm stuck, because I don't know how to prove it's a basis for $\tau_1$. I tried to prove it like this: Let $U\in\tau_1\subset\tau_2$; that means there is $B'\subseteq B$ so that $U=\cup B'$. What next? How do I show it's also a union of elements of $B_1$?
AI: Will a counterexample do? Let $\tau_2$ be the discrete topology on $\mathbb{N}$, and suppose $\tau_1$ is a non-first-countable (equivalently, non-second-countable) topology on $\mathbb{N}$. Since $\tau_2$ is second-countable and finer than $\tau_1$, this will be a counterexample to what you wanted to prove. So all we have to do is find a non-first-countable topology on $\mathbb{N}$. Here are two ways to do that.
I. Let $\mathcal{U}$ be a uniform ultrafilter on $\mathbb{N}$. Call a set $X\subseteq\mathbb{N}$ open if either $1\notin X$ or else $X\in\mathcal{U}$.
II. For $X\subseteq\mathbb{N}$ let $d(X)$ be the asymptotic density of $X$. Call a set X open if either $1\notin X$ or else $d(X)=1$.
In words: we make the topology discrete except at one point. For the neighborhood system of the special point we take some filter which is not countably generated. |
H: Why do we use open intervals in most proofs and definitions?
In my class we usually use intervals and balls in many proofs and definitions, but we almost never use closed intervals (for example, in Stokes Theorem, etc). On the other hand, many books use closed intervals.
Why this preference? What would happen if we substituted "open" with "closed"?
AI: My guess is that it's because of two related facts.
The advantage of open intervals is that, since every point in the interval has an open neighbourhood within the interval, there are no special points 'at the edge' like in closed intervals, which require being treated differently.
Lots of definitions rely on the existence of a neighbourhood in their most formal aspect, like differentiability for instance, so key properties within the result may require special formulation at the boundary.
In PDE/functional analysis contexts, for example, boundaries are very subtle and important objects which are treated separately. |
H: Is there a difference between 'inconsistent', 'contrary', and 'contradictory'
Is there a difference between 'inconsistent', 'contrary' and 'contradictory'? As far as I understand, two statements are inconsistent when they cannot both be true; two statements are contradictory when they cannot both bear the same truth value.
Does 'contrary' have a unique meaning?
AI: A contradiction is a pair of statements of the form $p$ and $\lnot p$.
A set of statements is contradictory if there is a pair of statements in it that form a contradiction.
A set of statements is inconsistent if you can derive a contradiction from them.
We will sometimes use the terms "contradiction" and "contradictory" more loosely to mean "obviously inconsistent."
"Contrary" is used informally to mean the negation of a statement. I don't think I've ever seen a set of statements called "contrary," for example, just because there is a contradiction. Perhaps there is a formal use of the term that I am missing. |
H: Finding the indefinite integral $\int \frac{3x+2}{(6x^2+8x)^7}\,\mathrm dx$
I'm not too familiar with how to solve this. Could anyone present a step by step guide on how to get the answer?
$$\int \dfrac{3x+2}{(6x^2+8x)^7}\,\mathrm dx$$
AI: Let $u = 6x^{2} + 8x$. Then $du = (12x + 8) dx \rightarrow du = 4(3x + 2) dx$. This is equivalent to $\frac{du}{4} = (3x + 2) dx$
Make the substitution for the integral, so we have:
$$\int \frac{(3x + 2)\, dx}{u^{7}}$$
$$= \int \frac{1/4 du}{u^7}$$
$$= \frac{1}{4} \int \frac{du}{u^7}$$
$$= \frac{1}{4} \int u^{-7} du$$
Thus, by power rule, we have:
$$\frac{1}{4} \frac{u^{-7 + 1}}{-7 + 1} + c$$
$$= \frac{1}{4} \frac{u^{-6}}{-6} + c$$
$$= \frac{-1}{24} u^{-6} + c$$
$$= \frac{-1}{24} (6x^{2} + 8x)^{-6} + c \text{ where c is an arbitrary constant}$$
In terms of positive exponent, we obtain:
$$\frac{-1}{24(6x^{2} + 8x)^{6}} + c$$
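As a quick sanity check (an addition to the original answer, assuming SymPy is available), one can differentiate the result back:

```python
import sympy as sp

x = sp.symbols('x')
integrand = (3*x + 2) / (6*x**2 + 8*x)**7
antiderivative = -sp.Rational(1, 24) * (6*x**2 + 8*x)**(-6)

# the derivative of the proposed antiderivative should equal the integrand
assert sp.simplify(sp.diff(antiderivative, x) - integrand) == 0
```

The assertion passes, so the antiderivative above is consistent with the integrand. |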
H: Show that $\int_0^t\!\!\left(t^2-x^2\right)^n\mathrm{d}x=\frac{\sqrt{\pi}}{2}t^{2n+1}\frac{\Gamma(n+1)}{\Gamma(n+\frac{3}{2})}$
The question asks to prove the identity:
$$\int_0^t\!\!\left(t^2-x^2\right)^n\mathrm{d}x=\frac{\sqrt{\pi}}{2}t^{2n+1}\frac{\Gamma(n+1)}{\Gamma(n+\frac{3}{2})}$$
where $n\in\mathbb{Z}$
I have no idea where to start. Thank you for your help!
AI: The key formula in this kind of problems is the beta function evaluation
$$B(p,q)=\int_0^1s^{p-1}(1-s)^{q-1}ds=\frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)}.$$
To guess what kind of change of variables may reduce your integral to this one, consider zeros of the integrand and compare with the bounds of integration. In your case the change of variables is very easy to guess: setting $s=x^2/t^2$, we obtain
\begin{align}
\int_0^t\left(t^2-x^2\right)^n dx=\frac12 t^{2n+1}\int_0^1 s^{-\frac12}(1-s)^nds=
\frac12 t^{2n+1}B\left(\frac12,n+1\right)=\\
=\frac12 t^{2n+1}\frac{\Gamma\left(\frac12\right)\Gamma(n+1)}{\Gamma\left(n+\frac32\right)}=\frac{\sqrt{\pi}}{2} t^{2n+1}\frac{\Gamma(n+1)}{\Gamma\left(n+\frac32\right)},
\end{align}
where it is also used that $\Gamma\left(\frac12\right)=\sqrt{\pi}$. |
H: Find the limit as n approaches infinite
We have the following function:
$$U_n = \sin \dfrac{1}{3} n \pi$$
What is the limit of this function as n approaches infinity?
I first tried to use my calculator for help; for $n$ I chose some arbitrarily large numbers, such as 100 and 1000. Then I just took $n = 10^{50}$ and it gave me an error.
So the correct answer is it doesn't have one, but why? Why does this function have a solution for $n = 10^2, 10^3$ and not for bigger numbers such as $n=10^{50}$?
AI: As David hinted in his comment, try using the fact that
$$
\sin(x + 2\pi k) = \sin(x)
$$
whenever $k$ is an integer. Or just write out the first few values of $U_n$ (compute these using the unit circle) and you should notice a pattern.
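For concreteness (values computed from the unit circle, not part of the original hint), the sequence repeats with period $6$: $$U_1=\tfrac{\sqrt3}{2},\quad U_2=\tfrac{\sqrt3}{2},\quad U_3=0,\quad U_4=-\tfrac{\sqrt3}{2},\quad U_5=-\tfrac{\sqrt3}{2},\quad U_6=0,\quad U_7=U_1,\ \ldots$$ Since the sequence keeps cycling through these six values, it has no limit. |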
H: Integral of $\int \frac{x^4+2x+4}{x^4-1}dx$
I am trying to solve this integral and I need your suggestions.
$$\int \frac{x^4+2x+4}{x^4-1}dx$$
Thanks
AI: Welcome to math.stackexchange; this question was answered already.
Here is the link
Using polynomial division, we get $$\int \frac{x^4+2x+4}{x^4-1} dx = \int 1 + \frac{2x+5}{(x^2 - 1)(x^2 + 1)}dx
= \int 1 + \frac{2x+5}{(x+1)(x-1)(x^2+1)} dx $$
Expressing this as partial fractions, we need only find $A, B, C, D$:
$$= \int \left(1 + \frac{A}{x+1} + \frac B{x-1} +\frac{Cx+D}{x^2 + 1}\right)\,dx$$
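For completeness (this step was left to the reader in the original answer), clearing denominators gives $2x+5 = A(x-1)(x^2+1)+B(x+1)(x^2+1)+(Cx+D)(x^2-1)$; setting $x=-1$ and $x=1$ and matching the remaining coefficients yields $$A=-\frac34,\qquad B=\frac74,\qquad C=-1,\qquad D=-\frac52.$$ |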
H: Eigenvalues of $\sum_{i=1}^n \frac{(x_i - x_{i-1})^2}{\lambda_i}$
Consider the quadratic form
$$
\mathbf{x}^{\intercal}Q\mathbf{x} = \frac{x_1^2}{\lambda_1} + \sum_{i=2}^n \frac{(x_i - x_{i-1})^2}{\lambda_i}\ .
$$
Is it true that the eigenvalues of $Q$ are $\lambda_i^{-1}$ ?
AI: The answer to your question is no. $Q$ can be written in the form
$$\begin{bmatrix}1 & -1 \\ & 1 & -1 \\ & & 1 & \ddots \\ & & & \ddots & -1 \\ & & & & 1 \end{bmatrix}
\begin{bmatrix}\lambda_1^{-1} \\ & \lambda_2^{-1} \\ & & \lambda_3^{-1} \\ & & & \ddots \\ & & & & \lambda_n^{-1} \end{bmatrix}
\begin{bmatrix}1 \\ -1 & 1 \\ & -1 & 1 \\ & & \ddots & \ddots \\ & & & -1 & 1 \end{bmatrix}
$$
Call that right-hand matrix $Z$. The eigenvalues would be $\lambda_i^{-1}$ if $Z^TZ=ZZ^T=I$. But $Z^TZ\neq I$. The eigenvalues are not $\lambda_i^{-1}$, and it's not hard to see this by trying a couple of numerical examples.
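A minimal NumPy check along those lines (an illustrative sketch; the size and the $\lambda_i$ are arbitrary choices):

```python
import numpy as np

lam = np.array([1.0, 2.0, 3.0])
n = len(lam)

# Z: 1 on the diagonal, -1 on the subdiagonal, so (Zx)_i = x_i - x_{i-1}
Z = np.eye(n) - np.diag(np.ones(n - 1), -1)
Q = Z.T @ np.diag(1.0 / lam) @ Z

print(np.sort(np.linalg.eigvalsh(Q)))  # eigenvalues of Q
print(np.sort(1.0 / lam))              # the lambda_i^{-1}, for comparison
```

The two printed lists differ, confirming that the eigenvalues are not the $\lambda_i^{-1}$. |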
H: Perimeter or Calculus Word problem
A rectangular plot of farmland will be bounded on one side by a river
and on the other three sides by a single strand of electric fence.
With 1400m of wire at your disposal, what is the largest area you can
enclose, and what are its dimensions?
Is there a way to solve this without Calculus? This sounds like a simple perimeter problem. If you have to use Calculus to solve this, how would I start? Could someone show me the steps?
AI: Let $r$ be the length of the rectangle along the river and $s$ be the length of the other side. Thus, we have to:
Max $rs$
Subject to:
$r + 2s = 1400$
Substitute for $r$ back into the objective function and we get the problem reformulated as:
Max $(1400-2s) s$
You can find the value of $s$ either via calculus or by completing the square.
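Completing the square (filling in the step left to the reader): $$(1400-2s)s = 1400s-2s^2 = -2(s-350)^2+245000,$$ so the maximum area is $245000\ \text{m}^2$, attained at $s=350\ \text{m}$ and $r=1400-2s=700\ \text{m}$. |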
H: Prove that there isn't a polynomial with $\text {f(x)}^{13} = {(x-1)}^{143}+(x+1)^{2002}$
Prove that there isn't a polynomial with $\text {f(x)}^{13} = {(x-1)}^{143}+(x+1)^{2002}$
We can easily find out that $\text {deg}(f) = 154$
Then?
AI: Hint: Consider the coefficients of $x^0,x^1$ on both sides of the equation. This suffices.
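In more detail (spelling the hint out; skip this if you want to find it yourself): the constant term of the right-hand side is $(-1)^{143}+1=0$, so $f(0)^{13}=0$, hence $f(0)=0$ and $x\mid f(x)$, which would force $x^{13}\mid f(x)^{13}$. But the coefficient of $x^1$ on the right-hand side is $143\cdot(-1)^{142}+2002=2145\neq 0$, while $x^{13}\mid f(x)^{13}$ would make the coefficients of $x^1,\dots,x^{12}$ vanish. Contradiction. |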
H: What is $\lim_{n \rightarrow \infty}\sum\limits_{k = 1}^n\frac{k}{n^2}$?
We have $$\dfrac{1+2+3+...+ \space n}{n^2}$$
What is the limit of this function as $n \rightarrow \infty$?
My idea:
$$\dfrac{1+2+3+...+ \space n}{n^2} = \dfrac{1}{n^2} + \dfrac{2}{n^2} + ... + \dfrac{n}{n^2} = 0$$
Is this correct?
AI: HINT:
Applying the limit to individual summands like this is valid only when the number of summands is finite.
$$\dfrac{1+2+3+...+ \space n}{n^2} =\frac{\frac{n(n+1)}2}{n^2}=\frac12\cdot\left(1+\frac1n\right)$$ |
H: A question regarding the Poisson Process
I have the following question: Buses arrive at a city according to a Poisson process with a rate of 5 per hour. What is the probability that the fifth bus of the day arrives after midday, given they start arriving at 9 a.m.?
What I think is the correct way to go about this question is that I calculate:
$ P(N(0,3)\le4) $: the probability that at most four buses arrive between 9 a.m. and midday, because the fifth bus can only arrive after midday if no more than four buses arrive before then. Provided I am correct with that notion, the formula would look like: $ \sum_{i=0}^4 {15^i e^{-15}\over i!} $.
Thank you
AI: A short answer: yes, your answer is correct. The number of arrivals before midday is distributed as Poisson$(15)$, so you need this to be strictly smaller than $5$.
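Numerically (an added check, assuming SciPy is available):

```python
from scipy.stats import poisson

# P(at most 4 arrivals in 3 hours) at rate 5 per hour, i.e. mean 15
print(poisson.cdf(4, mu=15))  # ~0.000857
```

This agrees with the closed form $\sum_{i=0}^4 15^i e^{-15}/i!$. |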
H: cardinality of a set with repeating elements?
What is the cardinality of a set which has repeating elements?
For example $S = \{1,1,1,2,2\}$
Is each individual element counted?
Please quote a reference text if possible.
AI: The set $A = \{1,1,1,2,2\}$ is identical to the set $B = \{1, 2\}$ because one can show $A\subseteq B$ and $B\subseteq A$ and therefore $A=B$. Both have cardinality 2.
For example, in this answer I quoted Halmos:
The ordered pair of a and b… is the set $(a, b)$ defined by:
$$(a, b) = \{\{a\}, \{a,b\}\}.$$
…
We note first that if $a$ and $b$ happen to be equal, then the ordered pair
$(a, b)$ is the same as the singleton $\{\{a\}\}$.
This isn't precisely what you want, but it is a consequence of it: if $a$ and $b$ happen to be equal then the set $S = \{\{a\}, \{a,b\}\}$ is $\{\{a\}, \{a, a\}\}$, and the repeated $a$ in the $\{a,a\}$ element can be ignored, since $\{a,a\} = \{a\}$. Then $S = \{\{a\}, \{a\}\}$ and the repeated $\{a\}$ can be ignored, so $S$ is really just $\{\{a\}\}$.
That thread has a number of other references, particularly in this post by Brian Scott. |
H: When is an Integer a Rational Number, and are All Ratios Rational, Even $\frac{\sqrt{7}}{2}$?
$$\Bbb{Q} = \left\{\frac ab \mid \text{$a$ and $b$ are integers and $b \ne 0$} \right\}$$
In other words, a rational number is a number that can be written as one integer over another.
For an integer, the denominator is $1$. For example, $5$ can be written as $\dfrac 51$.
Is $5$ a rational number? Or is $\dfrac 51$ a rational number? I'm not able to figure out what the definition is actually saying. What are the numbers that cannot be written as one integer over another?
Irrational numbers are the numbers that cannot be written as one integer over another. Roots of numbers that are not perfect squares are examples of irrational numbers.
However, what is this then: $\dfrac {\sqrt 7} {2}$?
AI: Any number for which it is possible to express as the ratio or quotient of integers is a rational number. So yes, $5$ is rational, because it is possible to express this as $\frac 51, \frac {10}{2}...$.
$5$ is also an integer. Every integer is a rational number, but not every rational number, e.g. $\frac 12$, is an integer. We know $\frac 12$ is rational, because it is the quotient of two integers.
However, $\dfrac {\sqrt 7}{2}$ is not a ratio of integers. It is a ratio of a non-integer, namely $\sqrt 7$, over an integer, and it cannot be rewritten as a ratio of integers: if $\dfrac{\sqrt 7}{2}=\dfrac ab$ for integers $a,b$, then $\sqrt 7=\dfrac{2a}{b}$ would be rational, which it is not. So $\dfrac{\sqrt 7}{2}$ is not rational. |
H: If $|f(z)| \leq |z|^2+\frac{1}{\sqrt{|z|}}$, show f is quadratic polynomial.
Suppose the function $f$ is analytic in the punctured plane $z\neq 0$ (meaning zero is excluded) and satisfies the above condition, $|f(z)| \leq |z|^2+\frac{1}{\sqrt{|z|}}$; then show $f$ is a quadratic polynomial.
I think that if we multiply both sides by $z$ then we get that $g(z)=zf(z)$ goes to $0$ as $z$ goes to $0$; therefore $g$ is an analytic function. Then what should we do to prove this?
AI: First, since $\lim_{z \to 0} z f(z) = 0$, we see that $f$ has a removable singularity at $z=0$. Hence we may as well assume that $f$ is analytic everywhere, and so has an entire power series expansion at $z=0$.
Second, if $|z| \ge 1$, we have $\frac{1}{\sqrt{|z|}} \le |z|^2$, hence $|f(z)| \le 2 |z|^2$, for $|z|\ge 1$.
Third, if we choose $R \ge 1$, and suppose $|z| = R$, then Cauchy's estimate gives $|f^{(k)}(0)| \le \frac{2 R^2}{R^k}$. Letting $R \to \infty$, it follows that if $k>2$, then $f^{(k)}(0) = 0$. Consequently $f(z) = f(0) + f'(0) z + \frac{1}{2} f''(0) z^2$. |
H: How to integrate $\int_0^\infty e^{-ty^2} \sin t dt$
My book suggests that I do some sort of limiting
$\lim_{A \to \infty} \int_0^A e^{-ty^2} \sin t d t$
But I'm not getting anywhere.
AI: Starting with the initial integral.
You can integrate it by parts.
$$\int_0^A e^{-ty^2} \sin t d t=-\int_0^A e^{-ty^2} d \cos(t)=-\left(e^{-ty^2}\cos(t)\bigg|_0^A -(-y^2)\int_0^A \cos(t)e^{-ty^2} dt \right)$$
$$\int_0^A e^{-ty^2} \sin t d t=-e^{-ty^2}\cos(t)\bigg|_0^A -y^2\int_0^A e^{-ty^2} d\sin(t)=-e^{-ty^2}\cos(t)\bigg|_0^A -y^2\left(e^{-ty^2}\sin(t)\bigg|_0^A -(-y^2)\int_0^A \sin(t)e^{-ty^2} dt \right) $$
$$\int_0^A e^{-ty^2} \sin t dt=1-e^{-Ay^2}(\cos(A)+y^2\sin(A))-y^4\int_0^A e^{-ty^2} \sin t dt$$
$$(1+y^4)\int_0^A e^{-ty^2} \sin t dt=1-e^{-Ay^2}(\cos(A)+y^2\sin(A))$$
$$\int_0^A e^{-ty^2} \sin t dt=\frac{1-e^{-Ay^2}(\cos(A)+y^2\sin(A))}{(1+y^4)}$$
And finally, taking the limit, one can obtain the answer:
$$\lim_{A \to \infty} \int_0^A e^{-ty^2} \sin t d t=\frac{\lim_{A \to \infty} (1-e^{-Ay^2}(\cos(A)+y^2\sin(A))) }{1+y^4}=\frac{1}{1+y^4}$$
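A quick numerical confirmation (an addition, using mpmath and an arbitrary value of $y$):

```python
import mpmath as mp

y = mp.mpf('1.3')
numeric = mp.quad(lambda t: mp.exp(-t * y**2) * mp.sin(t), [0, mp.inf])
closed_form = 1 / (1 + y**4)
print(numeric, closed_form)
```

The two printed values agree to high precision. |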
H: Integrating a sine function that is to an odd power
I've started the chapter in my book where we begin to integrate trig functions, so bear in mind I've only got started and that I do not have a handle on more advanced techniques.
$\eqalign{
& \int {{{\sin }^3}x} dx \cr
& = \int {\sin x({{\sin }^2}x} )dx \cr
& = \int {\sin x({1 \over 2}} - {1 \over 2}\cos 2x)dx \cr
& = \int {{1 \over 2}\sin x(1 - \cos 2x)dx} \cr
& = \int {{1 \over 2}\sin x(1 - (2{{\cos }^2}x - 1))dx} \cr
& = \int {{1 \over 2}\sin x(2 - 2{{\cos }^2}x)dx} \cr
& = \int {\sin x - {{\cos }^2}} x\sin xdx \cr
& y = {1 \over 3}{\cos ^3}x - \cos x + C \cr} $
I got the right answer but it seems like an awfully long winded way of doing things, have I made things harder than they should be with this method?
AI: You have unnecessary middle steps, just use $\sin^2x = 1-\cos^2x$ after the second line.
To handle other odd powers:
$$\sin^{2k+1}x=(\sin^{2}x)^k(\sin x)=(1-\cos^2x)^k(\sin x)$$
then use $u=\cos x$.
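Explicitly, for the cube: $$\int\sin^3x\,dx=\int(1-\cos^2x)\sin x\,dx\overset{u=\cos x}{=}\int(u^2-1)\,du=\frac{u^3}{3}-u+C=\frac{\cos^3x}{3}-\cos x+C.$$ |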
H: Can someone please explain $e$ in layman's term?
I never really understood what $e$ means and I'm always terrified when I see it in equations. What is it? Can somebody dumb it down for me? I know it's a constant. Is it as simple as that?
AI: The simplest way to understand it is to consider the following function:
$$f(x)=\left(1+\frac{1}{x}\right)^x$$
As $x$ gets larger and larger, notice what number it gets closer to:
$$f(1)=2$$
$$f(2)=2.25$$
$$f(3)\approx 2.37073$$
$$...$$
$$f(1000)\approx 2.7169$$
$$...$$
It is important in mathematics because it is arguably the single real number most commonly found in nature (from heat flow to battery discharge to population growth/decay models), which is why the logarithm with base $e$ is called the natural logarithm. |
H: Dagger category generated by $\mathsf{Set}$ viewed as a subcategory of $\mathsf{Rel}$.
Whenever a category $\mathcal{C}$ is being viewed a subcategory of a dagger category $\mathcal{D}$, define that the dagger category generated by $\mathcal{C}$ is the least subcategory of $\mathcal{D}$ that is closed under the dagger operation and includes $\mathcal{C}$.
Question. Viewing $\mathcal{C}=\mathsf{Set}$ as a subcategory of $\mathcal{D}=\mathsf{Rel}$, what is the dagger category generated by $\mathcal{C}$? I'm especially interested to see whether all of $\mathcal{D}$ is generated in this case.
AI: I think that indeed all of $\mathsf{Rel}$ is generated by the following argument:
Consider a relation $R\subseteq X\times Y$, then we have natural maps $i\colon R\to X\times Y$, $\pi_1\colon X\times Y\to X$ and $\pi_2\colon X\times Y\to Y$.
Define $R^\prime = (\pi_2\circ i)\circ(\pi_1\circ i)^\dagger$.
Now $x R^\prime y$ iff $\exists z\in R\colon x(\pi_2\circ i) z\mbox{ and }z(\pi_1\circ i)^\dagger y$, which is true iff $\exists z\in R\colon x = \pi_2(i(z))\mbox{ and }y=\pi_1(i(z))$, which in turn is equivalent to $(x,y)\in R$ or $xRy$.
Hence $R=R^\prime$.
Since $\pi_2\circ i$ is in $\mathsf{Set}$ and $(\pi_1\circ i)^\dagger$ is in $\mathsf{Set}^\dagger$, this proves that $R$ is in the dagger category generated by $\mathsf{Set}$. |
H: Path-connectedness and compactifications
Is the compactification of a path-connected space path-connected? Why or why not?
(I came across this question in my notes while studying for finals and I have no idea.)
AI: Let $\Gamma = \{(x, \sin(1/x)) \mid 0 < x \le 1 \}$ and $X=\Gamma \cup \{0\} \times [-1,1]$. Then $\Gamma$ is path-connected (it is homeomorphic to the interval $(0,1]$), and $X$, being the closure of the bounded set $\Gamma$ in $\mathbb{R}^2$, is compact, so it is a compactification of $\Gamma$. However, $X$ is not path connected. |
H: Integrate over the uniform distribution on the simplex
Let $p=(p_1,\ldots,p_n)$ correspond to points in a simplex that add up to one, i.e. $p$ is a discrete probability distribution. I would like to compute an integral of the form $\int dp_1\ldots\int dp_n\sum_{i=1}^np_if(p_i)$ with $p$ uniformly distributed on the $n-1$ dimensional simplex. My question is, how can I parameterize $p_i$ such that the integral covers the simplex uniformly?
AI: If $p$ is uniform on the simplex $\Delta_n=\{(p_i)_{1\leqslant i\leqslant n}\mid p_i\geqslant0,p_1+\cdots+p_n=1\}$, then each (continuous, for every $n\geqslant2$) random variable $p_i$ has density $(n-1)(1-x)^{n-2}\mathbf 1_{0\leqslant x\leqslant 1}$ with respect to the Lebesgue measure $\mathrm dx$. Hence
$$
\int_{\Delta_n}\sum_{i=1}^np_if(p_i)\mathrm d\sigma(p_1,\ldots,p_n)=n(n-1)\int_0^1f(x)x(1-x)^{n-2}\mathrm dx.
$$
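A Monte Carlo sanity check of this formula (an added sketch; the test function $f$, the dimension, and the seed are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
n = 4
f = np.square  # any test function f

# sample uniformly from the simplex (flat Dirichlet distribution)
p = rng.dirichlet(np.ones(n), size=200_000)
monte_carlo = (p * f(p)).sum(axis=1).mean()

# right-hand side: n(n-1) * integral of f(x) x (1-x)^(n-2) over [0,1]
integral, _ = quad(lambda x: f(x) * x * (1 - x) ** (n - 2), 0, 1)
print(monte_carlo, n * (n - 1) * integral)
```

Both printed values are $\approx 0.2$ here, as the formula predicts. |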
H: Proving the quadratic formula (for dummies)
I have looked at this question, and also at this one, but I don't understand how the quadratic formula can change from $ax^2+bx+c=0$ to $x = \frac{-b\pm\sqrt{b^2-4ac}}{2a}$. I am not particularly good at maths, so can someone prove the quadratic formula in a simple way, with no complicated words? All help appreciated.
AI: Look at each step here:
$$
\begin{align*}
a x^2 + b x + c
&= 0 \\
a \left( x^2 + \frac{b}{a} x \right) + c
&= 0 \\
a \left( x^2 + \frac{b}{a} x + \frac{b^2}{4 a^2} \right) - \frac{b^2}{4 a} + c
&= 0 \\
a \left( x + \frac{b}{2 a} \right)^2
&= \frac{b^2}{4 a} - c \\
\left( x + \frac{b}{2 a} \right)^2
&= \frac{b^2 - 4 a c}{4 a^2} \\
x + \frac{b}{2 a}
&= \frac{\pm\sqrt{b^2 - 4 a c}}{2 a} \\
x &= \frac{-b \pm\sqrt{b^2 - 4 a c}}{2 a}
\end{align*}
$$
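A worked example (added for concreteness): with $a=1$, $b=-3$, $c=2$, $$x=\frac{3\pm\sqrt{9-8}}{2}=\frac{3\pm 1}{2}\in\{1,2\},$$ and indeed $x^2-3x+2=(x-1)(x-2)$. |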
H: Are (semi)-simple Lie algebras not solvable?
Let $L\ne \{0\}$ be a non-abelian simple Lie algebra, then the only ideals are $L$ and $\{0\}$. We know that $L'$ is the smallest ideal of $L$ such that $L/L'$ is abelian. If $L'=\{0\}$, then $L/L'=L$, but $L$ is non-abelian. So $L'=L$ and hence $L^{(k)}=L\ne \{0\}$ for all $k$, i.e. $L$ is not solvable. So simplicity implies non-solvability.
My question is, can we extend this idea to semi-simple Lie algebras? In other words, is it true that semi-simplicity implies non-solvability?
My definition for semi-simple: $L$ has no non-zero solvable ideals.
AI: As noted in the comments, the answer I give may not coincide with the definitions/assumptions of the book/lecture course you are following.
Suppose $L$ is a non-abelian semi-simple Lie algebra. Then by definition of semi-simplicity (or it is a theorem, depending on the approach you take) $L=\bigoplus_{i=1}^n L_i$ for some simple Lie algebras $L_i$. And because $L$ is non-abelian, at least one of the $L_i$ has to be non-abelian.
Because the argument is not really different for bigger $n$, let me for simplicity of notation assume that $L=L_1\oplus L_2$ with $L_1$ non-abelian (and we don't know whether $L_2$ is abelian or not). This means that $L=L_1\oplus L_2$ as vector spaces and additionally $[l_1,l_2]=[l_2,l_1]=0$ for all $l_1\in L_1$ and $l_2\in L_2$.
Now, by definition $L'=[L,L]=[L_1\oplus L_2,L_1\oplus L_2]=[L_1,L_1]+[L_1,L_2]+[L_2,L_1]+[L_2,L_2]$. The last equality follows as the Lie bracket is bilinear. But since we have a direct sum decomposition of Lie algebras, we have $[L_1,L_2]=[L_2,L_1]=0$. Hence $L'=L_1'\oplus L_2'$. Now since $L_1$ is non-abelian simple, we have that $L_1'=L_1$. Thus $L^{(k)}=L_1\oplus L_2^{(k)}$. In particular, $L$ is not solvable. |
H: Integral, set and parametric representation
I am to compute the following: $\displaystyle\iiint\limits_V 1\, dx\, dy\, dz$,
where $V= \{{(x,y,z) \in \mathbb R^3 : (x-z)^2 +4y^2 < (1-z)^2} \text{ and } 0<z<1\}.$
Does anyone have an idea what parametric representation I should take? I think there will be something elliptic, for instance $y=\frac{1}{2}r\sin(\alpha)$ and $z=z$, but I don't know what to do about $x$. Or maybe another parametric representation? Could you help me?
AI: Try
$$
u=x-z\quad
v=2y\quad
w=1-z
$$
The new domain will be $\{(u,v,w)\in\mathbb{R}^3:u^2+v^2<w^2,\ 0<w<1\}$. The Jacobian is also easy to compute.
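Sketching that computation (not in the original answer): $$\frac{\partial(u,v,w)}{\partial(x,y,z)}=\det\begin{pmatrix}1&0&-1\\0&2&0\\0&0&-1\end{pmatrix}=-2,$$ so $dx\,dy\,dz=\frac12\,du\,dv\,dw$ and the integral equals half the volume of the cone, $\frac12\int_0^1\pi w^2\,dw=\frac{\pi}{6}$. |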
H: Write the expression $\log(\frac{x^3}{10y})$ in terms of $\log x$ and $\log y$
What is the answer for this? Write the expression in terms of $\log x$ and $\log y$ $$\log\left(\dfrac{x^3}{10y}\right)$$
This is what I got out of the expression so far, the alternate form assuming $x$ and $y$ are positive: $$3\log(x)-\log(10y)$$ or maybe $$\log\left(\dfrac{x^3}{y}\right)-\log(5)=\log(2)$$?
I would appreciate a solution or an input thanks so much.
AI: As you have your answer, I will simply give some help for the future.
There are six basic formulas regarding logarithms (1, 2 and 3 being the most basic):
1. $\log_a m+\log_a n=\log_a(mn)$
2. $\log_a m-\log_a n=\log_a\left(\dfrac {m}{n}\right)$
3. $\log_a m^n= n\times\log_a m$
4. $\log_a a=1$
5. $\log_a b\times \log_n a=\log_n b$
6. $\dfrac {\log_a m}{\log_a n}=\log_n m$
and the basic equivalence useful for solving log equations:
$\log_{a}{b}=n\Longleftrightarrow a^n=b$
These can solve many problems regarding logarithms.
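Applied to the question at hand (with base $10$, using (1), (2), (3) and (4)): $$\log\left(\frac{x^3}{10y}\right)=\log x^3-\log(10y)=3\log x-\log 10-\log y=3\log x-\log y-1.$$ |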
H: Mathematical Symbol
In the following paper, what does the symbol $\Phi$ in equation $3.1$ (page $3$) represent? Does it represent the normal distribution?
AI: $\Phi(x)$ typically (and this is what it means in the article you have linked to) represents a suitably normalized error function, equivalently the cumulative distribution function of the normal distribution, i.e.,
$$\Phi(x) = \dfrac1{\sqrt{2 \pi}}\int_{-\infty}^{x} \exp(-t^2/2)dt$$ |
H: Abelian categories with direct sums
Does every abelian category admit direct sums?
If not, do categories admitting direct sums have a special name?
I'm asking this since I am writing a proof that requires direct sums but I only know that the category is abelian.
Thank you very much for your answer!
Edit. In the tag description it is written that indeed they possess finite direct sums.
AI: Yes, by the definition of an Abelian category, it contains all finite coproducts ("direct sums"). Just look back at your definition of an Abelian category to see all the extra things an Abelian category is assumed to have. |
H: Solving for a matrix from its quadratic form
I have a set of vectors that I am trying to predict from another set of vectors using a matrix $W$. To find this matrix, I decide I want to minimize the $\ell^2$ norm of the error, e.g.:
$$
\text{find} \min_W \|y - Wx\|_2 \\
x,y \in \mathbb{C}^N \quad W\in \mathbb{C}^{N \times N}
$$
Where $x$ and $y$ are respective vectors from their sets, and each pair is (I hypothesize) related to each other by $W$. I start by expanding this out:
$$
\min_W \left[ (y - Wx)^H (y - Wx) \right] \\
= \min_W \left[ (y - Wx)^H y - (y - Wx)^H Wx \right] \\
= \min_W \left[ (y^H y - y^H Wx)^H - (x^H W^H y - x^H W^H Wx)^H \right] \\
= \min_W \left[ y^H y - x^H W^H y - y^H W x + x^H W^H W x \right]
$$
Where $(\cdot)^H$ denotes Hermitian transpose. I want to take the derivative with respect to $W$ and set it equal to zero, but I'm having a hard time with the derivative as I'm not sure exactly how that works with Hermitian transposes. Taking a look at this page, it looks like I might be in trouble (i.e. I'm going about this the wrong way), but I wanted to pick your brains to see if you all had any ideas on how to move forward from here.
Thank you all!
EDIT
Michael C. Grant has pointed out that this question is underdetermined, and he is right, if I only have a single $x$ and $y$. However, I have a set of $x$ and $y$ vectors that I assume are related to each other by $W$ (and since right now I am working with simulations, I can generate as many pairs of $x$'s and $y$'s as I want). This is a kind of "system identification" problem, where I have input-output correspondences and I'm trying to understand how the math is derived.
AI: Yes, I'd suggest that you are in trouble, but not for the reason you may think. :-) You can certainly take the derivative of that expression with respect to $W$.
But more fundamentally, your problem is underdetermined: you have $2N$ complex quantities $x$ and $y$ that you are using to construct a matrix $W$ with $N^2$ complex quantities.
If $x\neq 0$, the minimum value of the norm is zero, and any matrix that satisfies $Wx=y$ is going to attain this minimum value. For instance, $W=(x^Hx)^{-1}yx^H$ will do the trick. But you can add any other matrix $V$ satisfying $Vx=0$ to $W$ and achieve the same result.
If $x=0$, then the minimum value of the norm is $\|y\|$, and any $W$ will achieve it.
EDIT: OK, now we see the problem. What you really want to do is something like this:
$$\begin{array}{ll}\text{minimize}_W & \| Y - W X \|_F\end{array}$$
where $X,Y\in\mathbb{C}^{N\times M}$, with $M\geq N$. This is actually a standard least squares problem, though it may not seem so since it's expressed in matrix form. Using Kronecker products, we can write this as
$$\begin{array}{ll}\text{minimize}_W & \| \mathop{\textrm{vec}}(Y) - (X^T\otimes I) \mathop{\textrm{vec}}(W) \|_2\end{array}$$
where $\otimes$ denotes the Kronecker product, and $\mathop{\textrm{vec}}$ stacks the columns of its matrix argument on top of one another. The normal equations for this least squares problem are
$$
\begin{aligned}
&(X^T\otimes I)^H (X^T\otimes I) \mathop{\textrm{vec}}(W) = (X^T\otimes I)^H \mathop{\textrm{vec}}(Y) \\
&\qquad \Longrightarrow\quad (\overline{X}\otimes I) (X^T\otimes I) \mathop{\textrm{vec}}(W) = (\overline{X}\otimes I) \mathop{\textrm{vec}}(Y) \\
&\qquad \Longrightarrow\quad (\overline{X}X^T\otimes I) \mathop{\textrm{vec}}(W) = (\overline{X}\otimes I) \mathop{\textrm{vec}}(Y) \\
&\qquad \Longrightarrow\quad WXX^H = YX^H
\end{aligned}
$$
Therefore, if $XX^H\in\mathbb{C}^{N\times N}$ is invertible---and note that's an important condition that is not guaranteed to be satisfied even if $M\geq N$---then the solution is
$$W = YX^H(XX^H)^{-1}$$
The similarity to the single-vector case is not too surprising. If $XX^H$ is singular, then $$W=YX^H(XX^H)^\dagger$$ is a minimizer, where $(XX^H)^\dagger$ is the pseudoinverse of $XX^H$. |
H: How to find the limit of this function
We have the function $$\dfrac{\sqrt{n^4 + 100}}{4n}$$
I think the best method is by dividing by $n$, but I have no idea what that yields, mainly because of the square root.
AI: $\text{Note that $\sqrt{n^4+100} > \sqrt{n^4}$. Hence, we have}$
$$\dfrac{\sqrt{n^4+100}}{4n} > \dfrac{\sqrt{n^4}}{4n} = \dfrac{n^2}{4n} = \dfrac{n}4$$
I trust you can finish it from here. |
H: Definition of Induced Module - Typo in Corps Locaux?
This is from the beginning of the section on group cohomology in Corps Locaux (English Edition).
Serre states that $A$ is an induced $G$-module if
(1) $A\cong A\otimes_\mathbb{Z}X$ for an abelian group $X$,
or, equivalently,
(2) $A=\bigoplus_{s\in G}s\cdot X$.
Is (1) a typo? This is a very strict condition not just on $A$, but also on $X$. It seems to me that the correct version of the first definition above should read $A\cong A\otimes_{\mathbb{Z}[X]}X$, which is in line with the usual corresponding notion for group representations over fields, but perhaps I'm just missing something simple.
AI: Condition (1) says nothing about $G$. It should be something like
(1) $A \cong X \otimes_{\Bbb Z} \Bbb Z[G]$ for some abelian group $X$.
We assume that the $G$-action on $X \otimes_{\Bbb Z} \Bbb Z[G]$ acts only on the $\Bbb Z[G]$ factor. |
H: Bifurcation value and description
Find the bifurcation values of $a$ and describe the bifurcations that take place at each value.
$\displaystyle dy/dt=e^{-y^2}+a$
I let $\displaystyle e^{-y^2}+a=0$ then solved for $y$. I got $y^2=-\ln(a)$. What do I do next to find $a$?
AI: You started well by letting $e^{-y^2} + a =0$ and then solving for $y$. This will allow us to determine the equilibrium points. You will arrive at $$y^2 = -\ln(-a).$$ (I believe you misplaced a negative sign.) From this we see that first of all, we must have $a < 0$. In order to find bifurcation points, we need to consider what values of $a$ will yield a change in the nature of the equilibrium points. That is, what value of $a$ will cause a change in the number or behavior of the equilibrium points? Is there a value of $a$ that will cause no solution to the equation?
I leave this to you, but please ask a question if you get stuck. |
H: singularity of analytic continuation of $f(z) = \sum_{n=1}^\infty \frac{z^n}{n^2}$
How can one show that every analytic continuation of $\displaystyle f(z) = \sum_{n=1}^\infty \frac{z^n}{n^2} $ has a singular point at $z = 1$? I know that $f(z)$ converges for $|z| \le 1$. Also, is there a theorem that relates the singularities of the analytic continuation to the circle of convergence?
AI: Note that when $|z|<1$,
$$f'(z)=\sum_{n=1}^\infty\frac{z^{n-1}}{n}=-\frac{\log(1-z)}{z}.\tag{1}$$
Since the right hand side of $(1)$ has a unique singularity at $z=1$, it implies that on the unit circle, the analytic continuation of $f$ has a unique singularity at $z=1$.
For the general situation, note that for one complex variable, any (nonempty) open set in $\mathbb{C}$ is a domain of holomorphy, which implies that for any closed subset $S$ of the unit circle, there exists a holomorphic function $f$ defined on the unit disk, such that the collection of sigularities of the analytic continuation of $f$ on the unit circle is precisely $S$.
Remark: Just in case, $(1)$ follows from integrating
$$(zf'(z))'=\sum_{n=0}^\infty z^n=\frac{1}{1-z}.$$ |
H: Terminology for an element of a partition?
Suppose I'm dividing some region $\Theta \subseteq \mathbb{R}^n$ into subregions $\theta_i, i=1,2,3$ such that $\theta_i \cap \theta_j = \varnothing, i\ne j$ and $\bigcup_i \theta_i = \Theta$. I might say (perhaps loosely, even technically incorrectly) that I am partitioning the region $\Theta$.
Thus, a "partition" would be a particular configuration of $\{\theta_1,\theta_2,\theta_3\}$ that satisfies the aforementioned conditions. But what would an element of a partition be referred to as?
Since I am partitioning a "region", it makes sense to say that I am partitioning a region into "subregions", but at a higher level, what is a correct term for an element of a partition?
AI: The terms cells, or classes, or blocks, even parts of a partition are often used to describe the "sub-regions" of a given partition, depending on the nature and/or context of the partition.
See for example, Partition of a Set. |
H: Counting couples of numbers
I have no trouble believing that, if $|n| \leq J$, then $$\#\{ (j_1,j_2) \in \{ 1,...,J \} \, | \, j_1-j_2 = n \} = J-|n|,$$
but can anyone explain it a little more formally?
Thank you in advance for any help.
AI: We can restrict to the case $n \ge 0$, since if $n \le 0$ then
$$\{ (j_1,j_2) \in \{ 1,...,J \} \, | \, j_1-j_2 = n \} =
\{ (j_2,j_1) \in \{ 1,...,J \} \, | \, j_2-j_1 = -n \},$$
with $-n \ge 0$.
Write
$$S = \{ (j_1,j_2) \in \{ 1,...,J \} \, | \, j_1-j_2 = n \},$$
with $n \ge 0$.
Now if $(j_1, j_2) \in S$, we have $j_1 = n + j_2 > n$. Conversely, if $J \ge j_1 > n$, then take $j_2 = j_1 - n \in \{ 1, \dots , J \}$, and $(j_1, j_2)\in S$.
So there are as many pairs in $S$ as the $J - n$ values of $j_1$ from $n+1$ to $J$. |
H: can we divide by any term when we have an differential homogeneous equation?
I am asking because I think we divided by $x$ here for whatever reason, since the other side is equal to 0 and it won't affect the equation in any meaningful way.
Letting $y=ux$ we have
$$\begin{align}
(x-ux) dx + x(udx + x du) &= 0 \\
dx+ x du &= 0\\
\frac{dx}x+du&=0\\
\ln |x| + u &= c\\
x\ln |x|+ y &= cx.
\end{align}$$
I got $x/2 = y + c$ instead
The thing I don't understand though is how this is legal, because if it is it means there is an infinite number of solutions possible as we could also multiply both sides by anything.
AI: Yes, provided that term is non-zero.
Notice that after division by $x$, and a non-explicit integration, we had a $\ln |x|$ term which, just like $\frac{1}{x}$, is not well-defined when $x=0$.
Notice that several lines have been missed out. Going from the third to the fourth line, we had:
\begin{array}{ccc}
\frac{\operatorname{d}\!x}{x} + \operatorname{d}\!u &=& 0 \\ \\
\frac{\operatorname{d}\!x}{x} &=& -\operatorname{d}\!u \\ \\
\int \frac{\operatorname{d}\!x}{x} &=& -\int \operatorname{d}\!u \\ \\
\ln|x| &=& -u + c \\ \\
\ln|x| + u &=& c
\end{array}
The original question was in terms of $y$ and $y$ was swapped for $ux$. If $y=ux$ then $u=\frac{y}{x}$, so let's make the swap:
$$\ln|x| + \frac{y}{x} = c$$.
Since $x \neq 0$, we can multiply through by $x$ to give:
$$x\ln|x| + y = cx$$ |
H: Remainders problem
What will be the remainder if $23^{23}+ 15^{23}$ is divided by $19$?
Someone did this way:
$15/19 = -4$ remainder and $23/19 = 4$ remainder So $(-4^{23}) + (4^{23}) =0$ but i didn't understand it
AI: $$\forall\,\text{prime}\;p\;\wedge\;\forall\,a\in\Bbb Z\;,\;(a,p)=1\implies a^{p-1}=1\pmod p$$
$$23=4\pmod{19}\;\wedge\;23^{19}= 4^{19}=4\pmod{19}\;,\\\;15=-4\pmod{19}\implies 15^{19}=15=-4\pmod {19}$$
Thus (all the following is done modulo $\,19\,$):
$$23^{23}+15^{23}=4^5+(-4)^5=0$$
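One can confirm this with a one-line modular-exponentiation check (an added aside, not a replacement for the proof):

```python
print((pow(23, 23, 19) + pow(15, 23, 19)) % 19)  # prints 0
```

The printed value is $0$, as claimed. |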
H: Definition of continuous map
Let $X,Y$ be topological spaces, let $f:X\longrightarrow Y$ a function. There are several (equivalent) ways to define continuity of $f$, one of these says: $f$ is continuous if
$$(1)\qquad \forall A\subseteq X,\ \forall x\in X, \Big(x\in \overline{A}\Rightarrow f(x)\in\overline{f(A)}\Big)$$
where $\overline{A}$ denotes the closure of $A$ in $X$, and similarly for $f(A)$ in $Y$.
I was wondering if reversing this condition, i.e.
$$(2)\qquad \forall A\subseteq X,\ \forall x\in X, \Big(f(x)\in \overline{f(A)}\Rightarrow x\in\overline{A}\Big)$$
also defines a continuous map. So, I'm asking if $(1)$ and $(2)$ are equivalent; if not, I'm looking for a counterexample, i.e. a continuous map $f$ not satisfying $(2)$.
Any help appreciated!
AI: Non-injective functions are prone to this.
For example, $f\colon\mathbb R\to\mathbb R$, $x\mapsto 0$ is continuous, but does not have $(2)$, e.g. with $A=[0,1]$ and $x=2$.
On the other hand, the function $x\mapsto x$ from a set with at least two elements endowed with the indiscrete topology to the same set endowed with discrete topology has property $(2)$, but is not continuous. |
H: Function Spaces
What exactly is the difference between the $L^2$ space and the ${\ell}^1$ space? I believe that one of them is the space of square integrable functions.
Does it have to do with one being for series and the other for integration?
Thank You.
AI: $L^2(\Bbb R)$ is the space of square-integrable real functions:
$$L^2(\Bbb R) = \left\{ f: \Bbb R \longrightarrow \Bbb R \mid \int_{-\infty}^\infty |f(x)|^2 \, dx < \infty \right\}.$$
Note that the above is not quite right; we say two functions $f, g \in L^2(\Bbb R)$ are equivalent if they take the same values outside of a set of measure zero.
$\ell^1(\Bbb R)$, on the other hand, is the space of absolutely convergent series:
$$\ell^1(\Bbb R) = \left\{ \{a_n\}_{n = 1}^\infty \mid \sum_{n = 1}^\infty |a_n| < \infty \right\}.$$
In general,
$$L^p(\Bbb R) = \left\{ f: \Bbb R \longrightarrow \Bbb R \mid \int_{-\infty}^\infty |f(x)|^p \, dx < \infty \right\}$$
for $p \geq 1$ where we identify $f, g \in L^p(\Bbb R)$ if they agree outside of a set of measure zero, and
$$\ell^p(\Bbb R) = \left\{ \{a_n\}_{n = 1}^\infty \mid \sum_{n = 1}^\infty |a_n|^p < \infty \right\}.$$
If you learn measure theory at some point, you will see that there is a unifying definition of $L^p$ spaces that include the above examples as special cases: $L^p(\Bbb R)$ will be the $L^p$ space associated to $\Bbb R$ with the Lebesgue measure, and $\ell^p(\Bbb R)$ will be the $L^p$ space associated to $\Bbb N$ with the counting measure. |
H: Test for polynomial reducibility with binary coefficients
I'm learning about Galois Fields, in particular $GF(2^8)$, as they are applied to things like the AES algorithm and Reed-Solomon codes. Each of these relies on an irreducible 8th degree polynomial with binary coefficients to serve as a modulus for generating the particular field instance. For example AES uses $x^8+x^4+x^3+x^1+x^0$.
Is there a test I can apply to an 8th degree polynomial with binary coefficients I am presented with to determine whether or not it is irreducible?
AI: A non-irreducible polynomial $p(x)$ of degree eight has an irreducible factor of degree $\le 4$. This suggest the following test. Check divisibility by all of $x$, $x+1$, $x^2+x+1$, $x^3+x+1$, $x^3+x^2+1$, $x^4+x+1$, $x^4+x^3+1$ and $x^4+x^3+x^2+x+1$. That list contains all the irreducible polynomials of degree $\le 4$ with coefficients in $GF(2)$. If your polynomial is not divisible by any of these, it is irreducible.
That is a little bit of work. Of course testing divisibility by either $x$ or $x+1$ is trivial. A polynomial is divisible by $x$, iff its constant term is zero. And a binary polynomial is divisible by $x+1$, iff it has an even number of terms. The remaining six are a bit trickier. Often checking for divisibility by $x^2+x+1$ is aided by the observation that $x^3+1=(x+1)(x^2+x+1)$ is divisible by it, hence so are all binomials of the form $x^\ell+x^{\ell+3}$. This allows you to replace a high degree term with a lower degree term. Similarly the cubic irreducible ones are both factors of $x^7+1$, but that won't be nearly as useful in the calculations.
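A small Python sketch of this trial-division test (an addition; polynomials are encoded as bit masks, bit $i$ standing for $x^i$):

```python
def gf2_mod(a, b):
    """Remainder of a modulo b, both GF(2) polynomials as bit masks."""
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

# all irreducible polynomials of degree <= 4 over GF(2)
SMALL = [0b10, 0b11, 0b111, 0b1011, 0b1101, 0b10011, 0b11001, 0b11111]

def is_irreducible_deg8(p):
    """Trial-division irreducibility test for a degree-8 polynomial over GF(2)."""
    return all(gf2_mod(p, q) != 0 for q in SMALL)

print(is_irreducible_deg8(0b100011011))  # AES modulus x^8+x^4+x^3+x+1 -> True
```

For a reducible input such as $x^8$ (i.e. `0b100000000`, divisible by $x$), the same routine returns False. |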
H: Why are Haar measures finite on compact sets?
I'm working through the answer by t.b. to another user's question here:
A net version of dominated convergence?
because I am trying to work through a related problem and I think it will be illuminating.
From step two, "Then $KK'$ is compact and thus has finite Haar measure."
why is this true?
$\bf{\text{My attempt so far}}$:
Let $\mu$ be a left Haar measure on a locally compact group $G$. Then $\mu$ is non-zero, regular, and left $G$-invariant. Let $K\subset G$ be compact.
$\color{red}{(!)}$ If I could construct a set $U$ with the properties:
(i) $1\in U$
(ii) $0 < \mu(U) < \infty$
(iii) $U$ is open.
Then I could cover $K$ with $\{xU : x\in K\}$ and choose finitely many $k_{1}, ... , k_{n}\in K$ such that $K\subset \bigcup_{j=1}^{n}k_jU$, which forces $\mu(K)\leq \sum\limits_{j=1}^{n}\mu(k_jU) = n\mu(U) < \infty$.
I don't know how to construct such a $U$, though it seems innocent enough to ask for.
Does such a $U$ exist? or is the proof much more difficult?
AI: A Haar measure is finite on compact sets by definition: A Borel measure $\mu$ on a topological group $G$ is a (left) Haar measure if
$\mu$ is regular.
$\mu$ is left-invariant, i.e. $\mu(xU) = \mu(U)$ for any measurable $U \subset G$.
$\mu(U) > 0$ for all $U \subset G$ open.
$\mu(K) < \infty$ for all $K \subset G$ compact. |
H: Is any norm on $\mathbb R^n$ invariant with respect to componentwise absolute value?
Given $\mathbf{x}=(x_1,...,x_n) \in \mathbb{R}^n$ , define $ \mathbf{x}'=(|x_1|,...,|x_n|) $ .
Then, is $\|\mathbf{x}'\| = \|\mathbf{x}\|$ for every norm on $ \mathbb{R}^n $?
NB: The answer is trivially yes for $p$-norms.
AI: You may consider, for example,
$$\|(x_1,x_2)\|:=|x_1|+|x_1-x_2|,$$ for which $\|(1,-1)\|=3$ while $\|(1,1)\|=1$. |
H: How to integrate $\int_0^\infty \frac{1}{1+y^4} dy$
I tried the trigonometric substitution $y^2 = \tan \theta,\ \sec^2\theta = 1 + y^4$
But now I'm stuck with $\frac12 \int \frac{\sqrt{\sin \theta}}{(\cos\theta)^{\frac92} } d \theta$
I ran out of imagination as to what to try now
AI: Note that
$$\int_1^{\infty} \dfrac{dy}{1+y^4} = \int_0^1 \dfrac{y^2dy}{1+y^4}$$
Hence,
$$I=\int_0^{\infty} \dfrac{dy}{1+y^4} = \int_0^1 \dfrac{1+y^2}{1+y^4}dy$$
We have $y^4+1 = (y^2+1+y\sqrt2)(y^2+1-y\sqrt2)$. Hence,
$$1+y^2= \dfrac{(y^2+1+y\sqrt2) + (y^2+1-y\sqrt2)}2$$
Hence, we get that
\begin{align}
I & = \dfrac12\int_0^1\dfrac{dy}{1+y^2-y\sqrt2} + \dfrac12\int_0^1\dfrac{dy}{1+y^2+y\sqrt2}\\
& = \dfrac12\int_0^1\dfrac{dy}{\left(y-\dfrac1{\sqrt2}\right)^2+\left(\dfrac1{\sqrt2} \right)^2} + \dfrac12\int_0^1\dfrac{dy}{\left(y+\dfrac1{\sqrt2}\right)^2+\left(\dfrac1{\sqrt2} \right)^2}\\
& = \dfrac1{\sqrt2} \left(\arctan \left(y\sqrt2-1\right) + \arctan \left(y\sqrt2+1\right) \right)_{y=0}^1\\
& = \dfrac1{\sqrt2} \left(\arctan(\sqrt2 - 1)+\arctan(\sqrt2+1)\right) = \dfrac1{\sqrt2} \left(\arctan\left(\dfrac1{1+\sqrt2} \right)+\arctan(1+\sqrt2)\right)\\
& = \dfrac1{\sqrt2} \times \dfrac{\pi}2 = \dfrac{\pi}{2 \sqrt2}
\end{align} |
H: how many empty sets are there?
Would I be correct in saying that in the category of sets, the "class of sets that are isomorphic to the empty set is a proper class"?
In other words, there are LOTS of initial objects in the category of sets, but they're all related to each other via unique isomorphisms?
Similarly, for any category $C$, if $X$ is an object of $C$, then the full subcategory of objects isomorphic to $X$ is a proper class (i.e., there are LOTS of them)?
EDIT: Let me elaborate a bit. I'm not entirely sure if I should be considering ZFC or ETCS, etc. I'm asking this question from the point of view of algebraic geometry and its use of category theory. I mean, I know people say that the class of say, all elliptic curves over $\mathbb{C}$ is a proper class, but the class of $\mathbb{C}$-isomorphism classes of elliptic curves is a set. That would seem to imply that every isomorphism class is a proper class, and hence this would seem to say that the isomorphism class (ie, bijection class) of the empty set in the category of sets should also be a proper class, right?
Lana's answer with the extensionality axiom seems to make sense, in that there is only ONE empty set, so that the isomorphism class of the empty set is a singleton set. On the other hand, the isomorphism class of a set with 1 element should be a proper class right?
Generalizing a bit, would it be true that the isomorphism class of any initial (or terminal) object of a category be a singleton set? It seems to me that the isomorphism class of, say $\text{Spec }\mathbb{Z}$ in the category of schemes is a proper class, simply because there are many different presentations of rings isomorphic to $\mathbb{Z}$?
will
AI: Your conclusion is not entirely correct. While it's perfectly fine to have a context where a category has many initial objects which are isomorphic (but not equal), there is a delicate point to the inaccurate statement:
the class of say, all elliptic curves over $\mathbb{C}$ is a proper class, but the class of $\mathbb{C}$-isomorphism classes of elliptic curves is a set. That would seem to imply that every isomorphism class is a proper class
The fact that there is a proper class of different structure is quite trivial to show, because we can always change one element with another and get a proper class of different, but isomorphic, structures that way.
To say that there is only set-many isomorphism classes is also inaccurate, because the isomorphism classes are proper classes, there is no collection of the form $\{A\mid A\text{ is an isomorphism class of ...}\}$. Only sets can be members of other sets. But it does mean something else, it means that there is a set of pairwise non-isomorphic structures that any other structure is isomorphic to one of them. That is, there is a class of representatives which is a set.
Lastly, it does not mean that every equivalence class is a proper class, just that at least one of the equivalence classes are.
The simplest example of this is equinumerousity (in $\sf ZFC$). There is a proper class of sets of every cardinality, except for the class of sets of cardinality zero. There is only one of those.
If your foundational theory is $\sf ZFC$ or some related theory then there exists only one empty set. If your foundational theory is some category based theory which allows many empty sets, but requires them to be isomorphic, then this is a different case altogether. |
H: The number of words that can be made by permuting the letters of _MATHEMATICS_ is
The number of words that can be made by permuting the letters of MATHEMATICS is
$1) 5040$
$2) 11!$
$3) 8!$
$4) 4989600$
First of all, I do not understand the statement of the problem; I would like it if someone could explain it to me with an example.
AI: To understand the question we assume a simpler word say BALL and see how many permutations of the word exist.
BALL can be permuted as ABLL, ALBL, ALLB, BALL, BLAL, BLLA, LABL, LALB, LBAL, LBLA, LLAB and LLBA. A total of $12$ possible permutations.
The permutations of a word is given by $$\frac{\text{(Total number of alphabets)}!}{\text{(Repetitions of A)}!\text{(Repetitions of B)}!\text{(Repetitions of C)}!...\text{(Repetitions of Z)}!}$$
The number of words that can be made by permuting the letters of MATHEMATICS is
$$\frac{11!}{2!2!2!}=4989600$$ |
H: Half order derivative of $ {1 \over 1-x }$
I'm new to this "fractional derivative" concept and try, using wikipedia, to solve a problem with the half-derivative of the zeta at zero, in this instance with the help of the zeta's Laurent-expansion.
Part of this fiddling is now to find the half-derivative $$ {d^{1/2}\over dx^{1/2}}{1 \over 1-x}$$
First I would like to understand whether there is a short/closed form for this, or whether I have to express the fraction as a power series first and then differentiate termwise.
Next I would like to know the value at $x=0$.
AI: Use what you know about whole number derivatives. Inductively, you can prove $$\frac{d^n}{dx^n}\frac1{1-x}=\frac{n!}{(1-x)^{n+1}}$$ Now express $n!$ using the $\Gamma$ function ($\Gamma(n+1)$), and you can extend the definition to non-integral $n$: $$\frac{d^{1/2}}{dx^{1/2}}\frac1{1-x}=\frac{\Gamma(3/2)}{(1-x)^{3/2}}$$ At $x=0$, this is just $\Gamma(3/2)$.
To confirm that this method works, observe that you can also inductively prove $$\begin{align}\frac{d^n}{dx^n}\frac1{(1-x)^{3/2}}&=\frac{\frac{(2n+1)!}{4^n\cdot n!}}{(1-x)^{n+3/2}}\\&=\frac{\frac{\Gamma(2n+2)}{4^n\Gamma(n+1)}}{(1-x)^{n+3/2}}\end{align}$$ and extend to nonintegral $n$, so that $$\begin{align}\frac{d^{1/2}}{dx^{1/2}}\frac{d^{1/2}}{dx^{1/2}}\frac1{1-x}&=\frac{d^{1/2}}{dx^{1/2}}\frac{\Gamma(3/2)}{(1-x)^{3/2}}\\&=\frac{\Gamma(3/2)\frac{\Gamma(3)}{2\Gamma(3/2)}}{(1-x)^{2}}\\&=\frac{\frac{2!}{2}}{(1-x)^{2}}\\&=\frac{1}{(1-x)^2}\\&=\frac{d}{dx}\frac{1}{1-x}\end{align}$$ and all is as it should be. |
H: Should $\mathbb{N}$ contain $0$?
This is a classical question, that has led to many a heated argument:
Should the symbol $\mathbb{N}$ stand for $0,1,2,3,\dots$ or $1,2,3,\dots$?
It is immediately obvious that the question is not quite well posed. This convention, like many others, is not carved in stone, and there is nothing to prevent mathematician $A$ from defining $\mathbb{N}$ to be the positive integers together with $0$, and mathematician $B$ from defining $\mathbb{N}$ to be the positive integers excluding $0$. It does not seem that one definition is accepted widely enough for it to be "the right definition", and even if this was the case, the fashion might change in the future.
I am, however, hoping that there might be a semi-mathematical reason to prefer one notion over the other. For example, I have spent much of my mathematical life believing that $0 \in \mathbb{N}$ because: 1) morally, $\mathbb{N}$ is the cardinalities of finite sets 2) the empty set is a set with $0$ elements. However, recently I realised that this reasoning applies to $\omega$ rather than $\mathbb{N}$, and - much to my horror - I saw $\omega$ and $\mathbb{N}$ used side by side with the only distinction being that $0 \in \omega$ while $0 \not \in \mathbb{N}$. For another example, $\mathbb{N}$ seems to be a much nicer semigroup if $0 \not \in \mathbb{N}$ (and in any case, adding $0$ to a semigroup is a more natural operation than removing it), which would be an argument for taking $0 \not \in \mathbb{N}$.
The arguments mentioned above are, of course, rather weak, but perhaps just enough to tip the scale. In any case, this is the general type of argument I am looking for.
Question: Does there exists a convincing argument for deciding if $0 \in \mathbb{N}$ ?
(I consider it quite possible that the answer is negative because in some context one convention is preferable, and in other context the other one. The problem could be dismissed by using $\mathbb{Z}_+$ (or even $\mathbb{Z}_{>0}$ and $\mathbb{Z}_{\geq 0}$) to avoid confusion, but note that in some context one definitely does not want to do this. I would be interested in an argument that is universal in the sense that it makes overall mathematical landscape more elegant, and does not spoil any detail too much. I do not hope that the argument would be convincing to every mathematician, especially one working in a very specific and narrow area.)
AI: Let me ask a similar question in rebuttal:
Are finite sets countable?
The answer is simple. It depends on the context. Sometimes it's easier to have finite sets included in the definition of "countable", and sometimes it's easier to reserve "countable" for the infinite sets alone and use "at most countable" for the term which includes finite sets as well.
I will give an argument why $\Bbb N$ should include $0$, though.
One can consider $0$ and $1$ as the basic atoms of the numbers we know. $\Bbb N$ is the set generated by $0,1$ using addition; and then $\Bbb Z$ is generated by adding additive inverses, and $\Bbb Q$ and then $\Bbb R$ and $\Bbb C$.
Of course, if you take your atomic set of numbers to be $\Bbb C$ or something else, then it might as well be redundant, but it's still a reasonable argument. With only some naive set theory, and axioms for addition and multiplication, we can create all the numbers we need! That's an incredible thing. And all just from the assumption that $0$ and $1$ exist.
On the other hand, in analysis it's often more convenient to have $0\notin\Bbb N$. For example when we say that $x^n$ is well defined for every $x\in\Bbb R$ and $n\in\Bbb N$. Or if we often talk about the sequence $\frac1n$, then we find it easier to write $\frac1n$ for $n\in\Bbb N$, rather than adding "...and $n>0$". |
H: Proof $||A \underline x|| > 0 \Leftrightarrow \underline x \neq \underline 0$
If $n \geq m, A \in M(n,m)$ and $rg(A)=m$
Proof $||A \underline x|| > 0 \Leftrightarrow \underline x \neq \underline 0$
a)If $m \neq 0 \leftrightarrow A \neq 0_M$
Suppose that $\underline x = \underline 0 \Leftrightarrow ||A \cdot \underline 0||=||0||$
which is a contradiction to $||A \underline x|| > 0 $
therefore $||A \underline x|| > 0 \Leftrightarrow \underline x \neq \underline 0$
b)If $m \neq 0 \Leftrightarrow A \neq 0_M \Leftrightarrow A > 0$
per definition:
$||A|| > \frac{||A \underline x||}{|| \underline x||}$ and $ \forall \underline x \neq \underline 0$
then $||A \underline x|| > 0 \Leftrightarrow \underline x \neq \underline 0$
$\Box$
Is any of these ways a correct proof?
If not where are the mistakes or what can I improve?
AI: A rigorous proof would be as follows:
a matrix $A\in M(n,m)$ has rank $m$ (i.e., maximal rank) if and only if it represents an injective homomorphism from a vector space of dimension $m$ to a vector space of dimension $n$. So, if you start with a non-zero vector $\underline x$, its image $A\underline x$ will remain different from zero, thus $||A\underline x||>0$. It is also clear that $A\underline 0=\underline 0$.
H: Remainder when $20^{15} + 16^{18}$ is divided by 17
What is the remainder when $20^{15} + 16^{18}$ is divided by $17$?
I'm asking this question because I have some confusion about mod.
If you use mod, then please elaborate for a beginner.
Thanks in advance.
AI: The key thing to remember about the operation "mod" is that it behaves "well" with respect to products (hence powers), and of course addition. This means that you can simplify your life a lot by distributing the calculation into many steps and taking "mod" at each stage.
To compute $20^{15}$, you can first notice that $20 \pmod{17} = 3$. Then $20^{15} \pmod{17} = 3^{15} \pmod{17}$. The power $15$ is quite large, but you can for instance take: $3^3 \pmod{17} = 27 \pmod{17} = 10$, hence $ 3^{15} \pmod{17} = 10^5 \pmod{17} = 10 \cdot 100^2 \pmod{17} = 10 \cdot (-2)^2 \pmod{17} = 40 \pmod{17} = 6 \pmod{17}$.
The term $16^{18}$ is much easier: $16^{18} \pmod{17} = (-1)^{18} \pmod{17} = 1$.
Hence, the answer is $6 + 1 = 7$. |
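If you want to check this with a computer, Python's built-in three-argument pow does modular exponentiation directly:

```python
# Modular exponentiation: pow(base, exp, mod)
print(pow(20, 15, 17))                            # 6
print(pow(16, 18, 17))                            # 1
print((pow(20, 15, 17) + pow(16, 18, 17)) % 17)   # 7

# The intermediate steps from the argument above
print(20 % 17)         # 3
print(pow(3, 3, 17))   # 10
print(pow(10, 5, 17))  # 6
```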
H: How to find the number of positive divisors of $50,000$
How can I find the number of positive divisors of $50,000$? I would like to know what mathematical formula I need to use here, since the number is too big to handle mentally. I am sorry if this is too silly a question to ask here. Thank you.
AI: Write the prime factorization
$$
n = p_1^{e_1} p_2^{e_2} \cdots p_r^{e_r}.
$$
Notice that there is a one-to-one correspondence between positive integer factors of $n$ and $r$-tuples of integers $(d_1, d_2, \dots, d_r)$ such that $0 \le d_i \le e_i$ for each $i$. Explicitly,
$$
k = p_1^{d_1} p_2^{d_2} \cdots p_r^{d_r} \longmapsto (d_1, d_2, \dots, d_r).
$$
There are $e_i + 1$ possible powers of $p_i$ for each $i$. Thus, the total number of divisors, $\sigma_0$ is given by
$$
\sigma_0(n) = (e_1 + 1)(e_2 + 1)\cdots(e_r + 1).
$$
For your example,
$$
50\,000 = 2^4 \cdot 5^5,
$$
so
$$
\sigma_0(50\,000) = (4 + 1)(5 + 1) = 30.
$$ |
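As a quick check in Python (the trial-division helper below is just a simple sketch):

```python
def divisor_count(n):
    """sigma_0(n) via the prime factorization n = p1^e1 * ... * pr^er."""
    count, p = 1, 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        count *= e + 1
        p += 1
    if n > 1:          # leftover prime factor
        count *= 2
    return count

print(divisor_count(50_000))                                 # 30
print(sum(1 for d in range(1, 50_001) if 50_000 % d == 0))   # 30, by brute force
```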
H: Find the inverse for arbitrary k
I need to find a, b, c, d, e, f, g, h (all of which are not zero)
such that $A$ is invertible for all real $k$, or show that this can't happen:
$$A = \left(\begin{array}{ccc}
a&b&c\\d&k&e\\f&g&h
\end{array}\right)$$
My answer was this can't happen, but I don't know what the answer is.
And I couldn't provide a complete proof to support my answer.
I assumed that $a,\ldots,h$ are all $1$. Then, using the determinant and elementary row operations, I got a zero determinant for all $k$.
And the general determinant for this with Sarrus's rule is
$\det(A) = k(ah-cf) + g(cd - ae) + b(ef - dh)$
What should I do next? Is this invertible or non-invertible?
Thanks,
AI: It is possible to choose $a,b,c,d,e,f,g,h$ such that the determinant is non-zero for all $k$, i.e., the matrix is invertible for all $k$.
Note that as you have $$\det(A) = k(ah-cf) + g( cd-ae) + b(ef-dh)$$
First lets cut the dependence on $k$ in the determinant, i.e., set $ah-cf = 0$, i.e., $ah=cf$. Hence, now the determinant is (eliminating $a$)
$$\det(A) = g\left(cd-\dfrac{cfe}h\right) + b(ef-dh) = - \dfrac{(dh-ef)(bh-cg)}h$$
Now choose $b,c,d,e,f,g,h$ such that the determinant is non-zero. |
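For a concrete instance (assuming sympy is available): take $c=f=h=1$, so $a=cf/h=1$, and $b=2$, $d=1$, $e=2$, $g=3$; then $dh-ef=-1\neq 0$ and $bh-cg=-1\neq 0$, and the determinant is a non-zero constant in $k$:

```python
import sympy as sp

k = sp.symbols('k')
A = sp.Matrix([[1, 2, 1],
               [1, k, 2],
               [1, 3, 1]])
print(sp.expand(A.det()))  # -1: non-zero and independent of k, so A is invertible for every k
```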
H: Quadrature formula
How can we find a quadrature formula $\int_{-1}^1 f(x) dx=c \displaystyle \sum_{i=0}^{2}f(x_i)$ that is exact for all quadratic polynomials?
Thanks for help.
AI: Write:
$$\int_{-1}^{1} f(x)\ dx =\int_{-1}^{1} ax^2+bx+c\ dx =\frac{2a}{3}+2c$$
Moreover you want to have three points all having the same weight. To make life simpler, let's also assume symmetry:
$$w\left(f(-x_0)+f(0)+f(x_0)\right) =\frac{2a}{3}+2c$$
$$w\left(2ax_0^2+3c\right)=\frac{2a}{3}+2c$$
It's clear that $w$ must be $2/3$, and so:
$$x_0^2 = 1/2 \to x_0=1/\sqrt{2}$$
All in all you have:
$$\int_{-1}^{1} f(x)\ dx \sim \frac{2}{3}\left(f(-1/\sqrt{2})+f(0)+f(1/\sqrt{2}) \right)$$
Edit: You can of course choose any three distinct points and solve for (generally unequal) weights that give exact results, but this makes the calculation far less elegant.
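A tiny numerical check of the resulting rule on a few arbitrary quadratics:

```python
import math

w, x0 = 2/3, 1/math.sqrt(2)

def rule(f):
    return w * (f(-x0) + f(0.0) + f(x0))

# Exact value of the integral of a x^2 + b x + c over [-1, 1] is 2a/3 + 2c
for a, b, c in [(1, 0, 0), (2, -3, 5), (0, 7, 1)]:
    f = lambda t, a=a, b=b, c=c: a*t*t + b*t + c
    assert abs(rule(f) - (2*a/3 + 2*c)) < 1e-12

print("exact on all sampled quadratics")
```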
H: Advocating base 12 number system
I had a calculus professor who suggested we should be using base 12 number system. What are the advantages of using such a system?
AI: As I see it, there are two advantages. First, it's not too different from base 10, so it comes fairly naturally. Second, 12 has many divisors, so $1/2, 1/3, 1/4, 1/6, 1/8, 1/9, 1/12,\ldots$ would all have terminating expansions in base 12.
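The rule behind this is that $1/n$ has a terminating expansion in base $b$ exactly when every prime factor of $n$ divides $b$. A small Python sketch makes the comparison with base 10 concrete:

```python
def prime_factors(n):
    p, out = 2, set()
    while p * p <= n:
        while n % p == 0:
            out.add(p)
            n //= p
        p += 1
    if n > 1:
        out.add(n)
    return out

def terminates(n, base):
    # 1/n terminates in the given base iff every prime factor of n divides the base
    return all(base % p == 0 for p in prime_factors(n))

for n in range(2, 13):
    print(n, terminates(n, 10), terminates(n, 12))
# e.g. 1/3, 1/6, 1/9, 1/12 terminate in base 12 but not in base 10,
# while 1/5 and 1/10 terminate in base 10 but not in base 12.
```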
H: Field of fractions of a finite $\mathbb{Z}$-module is finite extension of $\mathbb{Q}$
Let $A$ be a ring which is also a finitely generated $\mathbb{Z}$-module. If $A$ is an integral domain and $K$ is its field of fractions and $K$ has characteristic zero, then why is $K$ a finite dimensional vector space over $\mathbb{Q}$?
I see that $K$ contains $\mathbb{Q}$ but why is the extension finite?
AI: Let $a_1,\ldots,a_n$ generate $A$ as a $\mathbf{Z}$-module, and let $F=\mathbf{Q}(a_1,\ldots,a_n)\subseteq K$. Note that $\mathbf{Q}(a_1,\ldots,a_n)$ contains $A$, so in fact we must have $\mathbf{Q}(a_1,\ldots,a_n)=K$. Now, because $A$ is finite over $\mathbf{Z}$, it is in particular integral over $\mathbf{Z}$, which means that every element of $A$ is the root of a monic polynomial with coefficients in $\mathbf{Z}$. In particular, this is true of the $a_i$, and so each $a_i$ is algebraic over $\mathbf{Q}$ (integrality over a field is the same as algebraicity). So $K$ is generated over $\mathbf{Q}$ by finitely many algebraic elements, and therefore is a finite extension of $\mathbf{Q}$. |
H: How to work with random variables?
If $X$ and $Y$ are independent random variables described by standard normal distribution could you please explain how to formally evaluate probabilities of occurrences such as $X-Y>0$ (intuitively it's $0.5$ of course) or $X^2-Y^2>0$?
Ultimately I'd like to be able to tell how likely is it that parabola $z(t)=X^2+2Yt+t^2$ has a root in positive semi-axis of $t$.
AI: If $X$ and $Y$ are jointly normal (and not necessarily independent), then $X-Y$ is a normal random variable. More generally, $aX+bY$ is a normal random variable with mean $a\mu_X+b\mu_Y$ and variance $a^2\sigma_X^2+b^2\sigma_Y^2+2ab\rho\sigma_X\sigma_Y$ where $\rho$ is the correlation coefficient. For your special case, $X-Y \sim \mathcal N(0,2)$ and so, with $\Phi(\cdot)$ denoting the cumulative probability density function of the standard normal random variable,
$$P\{X-Y > 0\} = 1-\Phi\left(\frac{0}{\sqrt{2}}\right) = 1 - \frac{1}{2} = \frac{1}{2}$$ just as you intuited, presumably via the symmetry argument that
since $X$ and $Y$ are independent identically distributed random variables,
then $P\{X<Y\}=P\{X>Y\}$. But the formal approach will work
in more general cases too. For example, more generally,
$$P\{aX+bY \leq c\} = \Phi\left(\frac{c-(a\mu_X+b\mu_Y)}{\sqrt{a^2\sigma_X^2+b^2\sigma_Y^2+2ab\rho\sigma_X\sigma_Y}}\right).$$
Finding $P\{X^2-Y^2 < 0\}$ requires more computation in the general case,
but for the case when $X$ and $Y$ are independent identically distributed
normal random variables, $X^2$ and $Y^2$ also are independent random
variables with identical (though non-normal) distribution, and so
$$P\{X^2-Y^2 < 0\} = \frac{1}{2}$$ by symmetry. Note that it is
not necessary to
find the distributions of $X^2$ and $Y^2$ and nor is integration
of the joint density over a region necessary. An alternative
calculation is to note that $\{X^2 - Y^2 < 0\}$ if and only if
$X+Y$ and $X-Y$ have opposite signs, and since $X+Y$ and $X-Y$
happen to be independent $\mathcal N(0,2)$ random variables,
the probability is
$\frac{1}{2}\times\frac{1}{2}+\frac{1}{2}\times\frac{1}{2} = \frac{1}{2}$.
With regard to your third question, the parabola $z = X^2+2Yt+t^2$ has
value $X^2 \geq 0$ at $t=0$. If the slope at $t=0$, $\left.\frac{dz}{dt}\right|_{t=0} = 2Y+2t\bigr|_{t=0} = 2Y $ is also positive, then the parabola will not cross
the positive $t$ axis. So, $Y < 0$ is a necessary condition for getting
a crossing on the positive $t$ axis. Also, $z$ has a minimum value
of $X^2-Y^2$ at $t = -Y$. Thus, the parabola crosses the positive $t$
axis (twice) exactly when $\{Y < 0\}$ and $\{X^2-Y^2 < 0\}$. The probability
is thus $\frac{1}{4}$ via the circular symmetry of the joint density of
two independent $\mathcal N(0,1)$ random variables. |
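A quick Monte Carlo sanity check of that $\frac14$ (numpy sketch; the seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
X = rng.standard_normal(N)
Y = rng.standard_normal(N)

# Root on the positive t-axis iff Y < 0 and X^2 - Y^2 < 0
frac = np.mean((Y < 0) & (X**2 < Y**2))
print(frac)  # ~0.25
```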
H: How do we know which component belongs to which part in a separable differential equation
Take for instance $\frac{dP}{dt} = kP$.
After separating we get $\frac{dP}{P} = k\,dt$, but why shouldn't it be
$\frac{dP}{kP} = dt$ instead? Mathematically it doesn't make sense to say that $k$ must belong absolutely to the right-hand side of the equation.
AI: Who said $k$ has to be on the right hand side? It is a constant, so it can be on either side. The important point is that anything in terms of $P$ is on the side with $dP$ and everything in terms of $t$ is on the side with $dt$. Since a constant is independent of $P$ and $t$, it may go on either side.
Let's solve $\dfrac{dP}{dt} = kP$ both ways to illustrate why it doesn't matter.
First we separate the equation as
$$\frac{dP}{P} = k \, dt.$$
Integrating both sides, we get
$$\ln |P| = kt + C.$$
Exponentiating both sides, we get the solution
$$P = e^{kt + C} = Ae^{kt},$$
where $A$ is an arbitrary constant.
Now separate the equation as
$$\frac{dP}{kP} = dt.$$
Integrating both sides, we get
$$\frac{1}{k} \ln |P| = t + C.$$
Multiplying both side by $k$ results in
$$\ln |P| = kt + C',$$
where $C' = kC$. Since $C$ is an arbitrary constant, this is an irrelevant detail. Now exponentiating both sides gives
$$P = e^{kt + C'} = Ae^{kt},$$
where $A$ is an arbitrary constant.
So both methods give the same general solution. The reason for this is that we can factor out a constant from an integral, so the side that the constant is on doesn't affect the integration.
The reason every book will leave the $k$ on the right-hand side for this problem is because we want a solution for $P$; we'll just have to move $k$ back to the right-hand side after integrating if we start with it on the left-hand side with $P$. |
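If you want a CAS to confirm that both routes land on the same family, a short sympy sketch:

```python
import sympy as sp

t, k = sp.symbols('t k')
P = sp.Function('P')

sol = sp.dsolve(sp.Eq(P(t).diff(t), k * P(t)), P(t))
print(sol)  # P(t) = C1*exp(k*t) -- the arbitrary constant absorbs whichever side k was on
```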
H: $f(g(x))=x$ implies $f(x)=g^{-1}(x)$
Is it possible to find a necessary and sufficient condition to conclude when
$$f(g(x))=x \implies f(x)=g^{-1}(x) \wedge f^{-1}(x)=g(x),$$
if both functions are well defined?
AI: If either $f$ is injective or $g$ is surjective, then $f\circ g={\rm id}$ implies that $f$ and $g$ are both bijections; hence $f^{-1}$ and $g^{-1}$ exist, and necessarily $f^{-1}=g$ and $g^{-1}=f$.
H: Projecting a surface segment of a cone onto a 2D plane?
Firstly, I'd like to apologise - I do not know the correct terms for what I am asking.
Assume that the top/bottom of the highlighted portion there is actually aligned with the base.
To help explain:
I need to wrap that section of the cone using a piece of paper. What shape (exactly) do I need to cut out from said paper so that it will wrap flawlessly?
AI: If I've understood your problem correctly, I think this should help. This one's a right circular cone and its opened up paper version. Here we've taken an arbitrary curved surface of the form shown on the cone and visualized. |
H: relationship of polar unit vectors to rectangular
I'm looking at page 16 of Fleisch's Student's Guide to Vectors and Tensors. The author is talking about the relationship between the unit vector in 2D rectangular vs polar coordinate systems. They give these equations:
\begin{align}\hat{r} &= \cos(\theta)\hat{i} + \sin(\theta)\hat{j}\\
\hat{\theta} &= -\sin(\theta)\hat{i} + \cos(\theta)\hat{j}\end{align}
I'm just not getting it. I understand how, in rectangular coordinates, $x = r \cos(\theta)$, but the unit vectors are just not computing.
AI: The symbols on the left side of those equations don't make any sense. If you wanted to change to a new pair of coordinates $(\hat{u}, \hat{v})$ by rotating through an angle $\theta$, then you would have
$$
\left\{\begin{align}
\hat{u} &= (\cos \theta) \hat{\imath} + (\sin \theta)\hat{\jmath} \\
\hat{v} &= (-\sin \theta) \hat{\imath} + (\cos \theta)\hat{\jmath}.
\end{align}\right.
$$ |
H: Show that if some nontrivial linear combination of vectors $\vec{u}$ and $\vec{v}$ is $\vec{0}$, then $\vec{u}$ and $\vec{v}$ are parallel.
I've never been that great at writing proofs, but I'm getting a bit better. I think I have the answer correct, but I don't know if I'm missing anything. My logic seems right but there may be some minute detail that I'm leaving out. Can anybody give any feedback on this? Thanks.
$\vec{0}$ being a nontrivial linear combination of $\vec{u}$ and $\vec{v}$ implies that there exists a non-zero $a$ or $b$ such that $a\vec{u}=-b\vec{v}$. Without loss of generality, assume $a\neq 0$. Then divide by $a$ and the equality holds: $\vec{u}=-\frac{b}{a}\vec{v}$. And since $\vec{u}$ is thus a scalar multiple of $\vec{v}$, it follows that $\vec{u}$ and $\vec{v}$ are parallel.
More rigorous proof:
\begin{align*}
\vec{0}=a\vec{u}+b\vec{v}&\Longrightarrow a\neq 0\vee b\neq 0&&\text{Given}\\
&\Longrightarrow a\vec{u}=-b\vec{v}\\
&\Longrightarrow \vec{u}=-\frac{b}{a}\vec{v}&&\text{WLOG assume $a\neq 0$}\\
&\Longrightarrow \vec{u}\text{ and }\vec{v}\text{ are parallel.}
\end{align*}
AI: Almost perfect.
But we don't have $a\ne 0$ and $b\ne 0$, only $a\ne 0$ or $b\ne 0$. So one of the vectors might be $\vec 0$, but the zero vector is parallel to every vector, so the conclusion still holds.
H: Preimages of a function: Is the following proposition true or false?
Let $g: ℤ \times ℤ → ℤ \times ℤ$ be defined by $g(m,n) = (2m, m – n)$.
Is the following proposition true or false? Justify your conclusion.
For each $(s, t) ∈ ℤ \times ℤ$, there exists an $(m, n) ∈ ℤ \times ℤ$ such that $g(m,n) = (s, t)$.
I understand the question, and I believe it is true. But I have no idea of what the steps are on how to prove it is true. I guess I should not have skipped the past couple of days. Thanks for any help.
AI: Hint: the first coordinate of the image point $(2m,m-n)$ is always even.
What happens if $s$ is odd? |
H: Special numbers in patterns and the reasons they are special
I know there are several big list questions out there (e.g. Patterns that break down at certain numbers) that touch on classifications of mathematical structures where certain numbers don't fit in, but I'd like to find more explanations on why these numbers are special.
For instance, all automorphisms of symmetric groups are inner (induced by conjugation), except for $n=6$, where the inner automorphisms form a subgroup of index two. The reason 6 is special is that the number of partitions of six distinct objects into two sets of three is the same as the number of partitions into three sets of two.
As another example, 3-manifold groups are special because in higher dimensions any group can be a manifold group (by Dehn surgery), but we can still embed all graphs into all 3-manifolds (which makes them more interesting than 2-manifolds).
Some special numbers I'd like to see explanations for are:
Fermat's last theorem: where does the proof break down for $n=2$? (I know there are obvious counterexamples, but why do the techniques used require $n>2$)?
Exotic smooth structures in dimension 4.
The finitely many quadratic extensions of the rationals that are not Euclidean or ufd's, etc.
AI: There's a question over at MathOverflow in this vein, focusing on why characteristic $2$ is so special. I think that you might like some of the answers there. |
H: Finding the x value after a matrix multiplication?
I have the following solution of a problem, and I was wondering about a hopefully quite simple thing in it:
I was wondering how they get from $[5,10,5]$ to $5x$? I am pretty sure there is a simple explanation for that.
Thanks in advance..
AI: Because $5\left[\begin{smallmatrix}1\\2\\1\end{smallmatrix}\right]=\left[\begin{smallmatrix}5\\10\\5\end{smallmatrix}\right]$.
I wanted to put this in the comments, but it wouldn't compile properly. |
H: Differentiate $\log_{10}x$
My attempt:
$\eqalign{
& \log_{10}x = {{\ln x} \over {\ln 10}} \cr
& u = \ln x \cr
& v = \ln 10 \cr
& {{du} \over {dx}} = {1 \over x} \cr
& {{dv} \over {dx}} = 0 \cr
& {v^2} = {(\ln10)^2} \cr
& {{dy} \over {dx}} = {{\left( {{{\ln 10} \over x}} \right)} \over {2\ln 10}} = {{\ln10} \over x} \times {1 \over {2\ln 10}} = {1 \over {2x}} \cr} $
The right answer is: ${{dy} \over {dx}} = {1 \over {x\ln 10}}$ , where did I go wrong?
Thanks!
AI: You made one small mistake when you looked at $v^2$. Note that: $$(\ln {v})^2\ne 2\ln(v)$$That would only be true if the expression was: $$\ln\bigl(v^2\bigr)$$ |
H: Inverse Laplace transform of the function: $F(s)=e^{-a\sqrt{s(s+r)}}$
I would like to find inverse Laplace transform of the function: $$F(s)=e^{-a\sqrt{s(s+b)}}$$
where $a$ and $b$ are positive real numbers and $s$ is a complex variable. It would be appreciated if someone could help me.
AI: See my solution to this problem. The analysis is identical, save for the denominator.
$$\oint_C ds \,e^{-a \sqrt{s (s+b)}}\, e^{s t}$$
where $C$ is the following contour pictured below:
This one's a bit odd because we are removing the branch point singularities by subtracting the inner contour $C_i$ from the outer contour $C_o$, i.e. $C=C_o+C_i$.
The integral about $C$ consists of the integral over the vertical path (that ultimately becomes the ILT) and the integral about the outer arc. We expect that this latter integral vanishes as the radius of the arc gets larger; this only happens when $a \lt t$ as shown below, by letting $s=R e^{i \phi}$; the integral over this arc is, for large $R$:
$$\left| \int_{\pi/2}^{3 \pi/2} d\phi \,e^{R (t-a) \cos{\phi}} \right|$$
Note that $\cos{\phi} \lt 0$ over the integration region, so that the integral only vanishes for $a \lt t$ when the contour opens to the left.
This leaves the integral over $C_i$ which consists of integrals above and below the real axis, between the branch points, and integrals about small circular arcs about the branch points. These latter integrals vanish in the limit of the radii of the arcs going to zero. Thus, we are left with
$$\int_{c-i \infty}^{c+i \infty} ds e^{-a \sqrt{s (s+b)}}\, e^{s t} - \underbrace{ \int_0^{b} dx\, e^{i a \sqrt{x (b-x)}}\, e^{-x t}}_{s=e^{-i \pi} x} + \underbrace{ \int_0^{b} dx\, e^{-i a \sqrt{x (b-x)}}\, e^{-x t}}_{s=e^{i \pi} x} = 0$$
We set the sum of the integrals to zero because of Cauchy's theorem, i.e., there are no poles inside the contour $C$. The second integral is the integral over the line below the real axis; the third is the integral over the line above the real axis. Note that the phase transition was continuous from $-\pi$ to $\pi$ because we did not cross the branch cut; we did not see this explicitly because that information was included in the integrals about the branch points which vanished.
We do a little algebra and we have a real-valued integral for the ILT:
$$\frac{1}{i 2 \pi} \int_{c-i \infty}^{c+i \infty} ds \, e^{-a \sqrt{s (s+b)}} e^{s t} = \frac{1}{\pi} \int_0^b dx \, \sin{\left [ a \sqrt{x (b-x)}\right]} e^{-x t}$$
We make similar substitutions as with the other solution and we find the RHS is equal to
$$\begin{align}\frac{b}{2 \pi} e^{- b t/2} \int_0^{\pi} d\phi \, \sin{\phi} \, \sin{\left[\frac12 a b \sin{\phi}\right]} \, e^{b t/2 \cos{\phi}}\\ = -e^{-b t/2} \frac{1}{\pi} \frac{\partial}{\partial a} \int_0^{\pi} d\phi \, \cos{\left[\frac12 a b \sin{\phi}\right]} \, e^{b t/2 \cos{\phi}}\\ = -e^{-b t/2} \frac{\partial}{\partial a} I_0\left( \frac{b}{2} \sqrt{t^2-a^2}\right)\end{align}$$
Note again that the contour is only closed to the left when $t>a$; when $t<a$ it is closed to the right. Therefore the ILT of $e^{-a \sqrt{s (s+b)}}$ is
$$\frac{a b}{2} e^{-b t/2} \left(t^2-a^2\right)^{-1/2} \, I_1\left(\frac{b}{2}\sqrt{t^2-a^2}\right) H(t-a)$$ |
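As a numerical consistency check of the last two steps, one can compare the real-valued integral above with the closed Bessel form at sample values with $t>a$; here is an mpmath sketch (the values of $a$, $b$, $t$ are arbitrary choices):

```python
from mpmath import mp, mpf, quad, sin, sqrt, exp, besseli, pi

mp.dps = 30
a, b, t = mpf('0.5'), mpf('1.2'), mpf('2.0')   # sample values with t > a

# (1/pi) * integral_0^b sin(a*sqrt(x(b-x))) e^{-x t} dx
lhs = quad(lambda x: sin(a * sqrt(x * (b - x))) * exp(-x * t), [0, b]) / pi

# (a b / 2) e^{-b t/2} (t^2 - a^2)^{-1/2} I_1((b/2) sqrt(t^2 - a^2))
rhs = (a * b / 2) * exp(-b * t / 2) * besseli(1, (b / 2) * sqrt(t**2 - a**2)) / sqrt(t**2 - a**2)

print(lhs, rhs)  # the two agree to high precision
```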
H: Relationship between 2 Dimensional Quadratic systems and roots
Given four points
$(x_1, y_1),\ (x_2, y_2),\ (x_3, y_3),\ (x_4, y_4)$
How does one construct a system of two equations:
$a_1x + a_2x^2 + a_3y + a_4y^2 + a_5xy = c_1$
$b_1x + b_2x^2 + b_3y + b_4y^2 + b_5xy = c_2$
such that the set of solutions of this system is the four original points?
Solving the general system is growing massively complicated.
AI: Here's one way to do it, at least when the points are in a general position.
Call the four given points $P_1, P_2, P_3, P_4$. Let $\ell_{ij}$ denote the line connecting $P_i$ and $P_j$, and $L_{ij}$ a linear equation in $x$ and $y$ satisfied by the points on $\ell_{ij}$. Then you can use the system of equations:
\begin{equation}
L_{12} L_{34} = 0, \\
L_{13} L_{24} = 0.
\end{equation}
Indeed, it is easy to see that each of the four points $P_i$ satisfy both equations, so they lie on the intersection. Provided none of the lines are identical (the condition that the points are in general position), then there are exactly four solutions to this system of equations, which you've already found. |
H: Better than Runge-Kutta-Fehlberg 4(5) at high order?
I wonder what are currently the best numerical solvers of ODEs for high-accuracy computations. I need an efficient and accurate method to solve ODEs that are not pathological (everything is smooth) using $128$-bit floating-point numbers (quad arithmetic).
So, what would be good choices (some references comparing different algorithms would be appreciated) ?
AI: Do you know something about the nature of your differential equation? One that involves exponential forms may be very efficiently approximated using exponentials, in a way similar to Taylor series.
Utilizing logarithms, $x\log(x)$, and mixtures of different elementary functions may prove useful too. Constructing the method isn't significantly difficult: you just need to use regression/basic slope and sum analysis to fit a particular type of function to your curve, then rinse and repeat with higher powers.
I would recommend this approach of customizing your own approximation method, especially if you know something about the nature/behavior of your solution (which you appear to do, since you can confidently say it's smooth) and want high performance as well as high accuracy.
H: The set of complex numbers of modulus $1$ is a group under multiplication
Show that $C=\{z\in \mathbb{C} \mid |z|=1\}$ is a group under complex multiplication.
I'm a little confused because isn't the identity the only element with order $1$? What is this set?
AI: Hint: prove that if you multiply two complex numbers of modulus $1$, then the result also has modulus $1$.
I suggest you learn about the polar representation of a complex number. This could make the solution easier.
H: Necessary and sufficient condition for an Euler trail between two vertices
This is a graph theory question: one of those "obvious" facts that is a pain to prove.
Show that $G=(V,E)$ has an Euler trail between (different) vertices $u$ and $v$ if and only if $G$ is connected and all vertices except $u$ and $v$ (which have odd degree) are of even degree.
Intuitively, if all other vertices are of even degree, then you can build a trail from $v$ to $a_1$ to $a_2$ and so on; eventually you will arrive at $u$ or $v$ with no way to leave, which gives the Euler trail.
But how would you actually prove this?
AI: Method 1:
Hint: Use the extremal principle. Consider the longest path.
Method 2:
Hint: Induct on the number of paths.
Method 3:
Hint: Connect $u$ with $v$, and apply the proof of Euler circuit. |
H: Spivak problem on Schwarz inequality
I have a question regarding problem 19 in the 3rd Ed. of Spivak's Calculus. Specifically, part (a). The question concerns the Schwarz inequality:
$$
x_1y_1 + x_2y_2 \leq \sqrt{x_1^2+x_2^2}\sqrt{y_1^2+y_2^2} \ .
$$
It says to prove that if $x_1=\lambda y_1$ and $x_2 = \lambda y_2$ for some number $\lambda$, then equality holds in the Schwarz inequality.
Substituting the given values for $x_1$ and $x_2$ we have
$$
\lambda (y_1^2+ y_2^2) \leq |\lambda|(y_1^2+y_2^2) \ .
$$
It appears to me that equality can only hold if $\lambda \geq 0$. Can someone explain to me how equality holds for any given $\lambda$?
AI: That is a typo. You need $\lambda\ge 0$. |
H: Why is $\{n=4r+1,r = {n-1\over 4}\}\subset \mathbb{P}$ true under these conditions?
Let $p=p_k$, $q=p_{k+1}$ and $r=p_{k+2}$, where $p_m$ denotes the $m$th prime.
I conjecture that whenever $n$ is prime, where $n$ is defined as follows:
$$n = 1+\left(\left\lfloor{p\over q}\right\rfloor+r\right)
\left\lfloor{(p+r)(q^2+pr)\over(pqr)}\right\rfloor$$
then:
$$ \frac{n-1}{4} \qquad\text{and}\qquad 4r+1$$ are both prime.
Although I've tried vigorously, I have no explanation as to why $n=4r+1$, i.e. $r = {n-1\over 4}$, although I heavily suspect that the solution rests on twin primes.
Alternative form:
$$n = 1+ z \left \lfloor{x\over y}+{z\over y}+{y\over z}+{y\over x}\right\rfloor+\left \lfloor{x\over y}\right\rfloor \left \lfloor{x\over y}+{z\over y}+{y\over z}+{y\over x}\right\rfloor$$
(here $x=p$, $y=q$, $z=r$).
Example:
Terms $1\to 5$, where $t_m$ denotes the $m$th number satisfying the conditions described in this question:
$$t_1 = 29 = 1+\left(\left\lfloor{3\over 5}\right\rfloor+7\right)\left\lfloor{(3+7)(5^2+3\cdot 7)\over(3\cdot 5\cdot 7)}\right\rfloor$$
$$\frac{29-1}{4} = 7,\qquad 4\cdot 7+1 = 29$$
$$t_2 = 53 = 1+\left(\left\lfloor{7\over 11}\right\rfloor+13\right)\left\lfloor{(7+13)(11^2+7\cdot 13)\over(7\cdot 11\cdot 13)}\right\rfloor$$
$$\frac{53-1}{4} = 13,\qquad 4\cdot 13+1 = 53$$
$$t_3 = 149 = 1+\left(\left\lfloor{29\over 31}\right\rfloor+37\right)\left\lfloor{(29+37)(31^2+29\cdot 37)\over(29\cdot 31\cdot 37)}\right\rfloor$$
$$\frac{149-1}{4} = 37,\qquad 4\cdot 37+1 = 149$$
$$t_4 = 173 = 1+\left(\left\lfloor{37\over 41}\right\rfloor+43\right)\left\lfloor{(37+43)(41^2+37\cdot 43)\over(37\cdot 41\cdot 43)}\right\rfloor$$
$$\frac{173-1}{4} = 43,\qquad 4\cdot 43+1 = 173$$
$$t_5 = 269 = 1+\left(\left\lfloor{59\over 61}\right\rfloor+67\right)\left\lfloor{(59+67)(61^2+59\cdot 67)\over(59\cdot 61\cdot 67)}\right\rfloor$$
$$\frac{269-1}{4} = 67,\qquad 4\cdot 67+1 = 269$$
AI: Note that $\lfloor \frac{p}{q} \rfloor=0$, since $0<\frac{p}{q}<1$. Thus $n=1+r\lfloor \frac{(p+r)(q^2+pr)}{pqr} \rfloor=1+r \lfloor \frac{q}{r}+\frac{r}{q}+\frac{p}{q}+\frac{q}{p} \rfloor$.
Note that for $1<x<2$, we have $2<x+\frac{1}{x}<\frac{5}{2}$, since $x+\frac{1}{x}>2 \Leftrightarrow x^2+1>2 \Leftrightarrow (x-1)^2>0$ and $x+\frac{1}{x}<\frac{5}{2} \Leftrightarrow 2x^2+2<5x \Leftrightarrow (2x-1)(x-2)<0$.
We clearly have $p<q<r$, and by Bertrand's postulate $q<2p, r<2q$, so $1<\frac{r}{q}, \frac{q}{p}<2$, so by above we have $4<\frac{q}{r}+\frac{r}{q}+\frac{p}{q}+\frac{q}{p}<5$, so $n=1+r \lfloor \frac{q}{r}+\frac{r}{q}+\frac{p}{q}+\frac{q}{p} \rfloor=1+4r$.
It is now straightforward to see that when $n$ is prime, $\frac{n-1}{4}=r$ and $4r+1=n$ are prime. |
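A quick computational check of both the identity $n=1+4r$ and the listed terms (sympy sketch):

```python
from sympy import prime, isprime

hits = []
for k in range(1, 300):
    p, q, r = prime(k), prime(k + 1), prime(k + 2)
    # floor division on positive integers matches the floors in the formula
    n = 1 + (p // q + r) * ((p + r) * (q * q + p * r) // (p * q * r))
    assert n == 4 * r + 1          # the identity proved above
    if isprime(n):
        hits.append(n)

print(hits[:5])  # [29, 53, 149, 173, 269], matching t_1, ..., t_5
```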
H: How to move a limit that is inside the integral outside of it
I did this:
$$\int_{1}^t x^{-1}dx=\int_{1}^t\lim_{n\rightarrow -1}{x^n}dx =\lim_{n\rightarrow -1}\int_{1}^t{x^n}dx $$ just to have a way to approximate $\ln t$: $$\ln{t}=\lim_{h\rightarrow 0}\frac{t^{h}-1}{h}$$
The second expression may be correct, but I was told I cannot say $\int_a^b \lim_{n\rightarrow -1}x^{n}dx=\lim_{n\rightarrow -1}\int_a^b x^n dx$ without previously proving some statements. If so, what are those statements, and where can I find information about this?
AI: In general, you cannot interchange two limits. In this case, recall that integration is actually a limit, for instance, the limit of a Riemann sum. Hence, you need to be careful; in general:
$$\int_a^b \lim_{n \to \infty} f_n(x) dx \neq \lim_{n \to \infty} \int_a^b f_n(x) dx$$
You may want to read through the following posts to figure out under what conditions you can swap two limits.
When can the order of limit and integral be exchanged?
Can a limit of an integral be moved inside the integral?
When can we exchange order of two limits? |
H: How do I solve for $dy/dx$ if $y=\ln (\sin x+\ln x)$?
Solve for $\frac{dy}{dx}$ if $y=\ln(\sin x+\ln x)$.
I know how to solve for integrals involving $du$ and $u$, but how do I do this type of problem (I think it's the opposite of the integral problem)?
AI: Use the "good old" chain rule: remember that?
$$y = \ln(f(x)) \iff y' = \dfrac{f'(x)}{f(x)}$$
$$y = \ln(\sin x + \ln x) \implies y' = \dfrac{\left(\cos x + \frac 1x\right)}{\sin x + \ln x}$$ |
H: To prove that $2^{3n}+2^n +1$ is not a perfect square.
Question: Prove that $2^{3n} + 2^n + 1$ cannot be a perfect square for any natural $n$.
I attempted this question and failed in two different ways.
1) I considered a polynomial $p(x) = x^3+ x + 1 - m^2$ (for some natural $m$) and factorized the polynomial assuming $2^n$ is a root. I, then, tried substituting some numbers to get a contradiction in divisibility (since $x - 2^n$ is a factor). Alas, all of that was in vain.
2) I wrote $2^{3n} + 2^n + 1 = m^2$ and thus $(m-1)(m+1) = 2^n(2^{2n} + 1)$. So one can conclude that if $xy = 2^{2n} + 1$ such that $(x,y) = 1$, then one of the $m-1$ or $m+1$ must be $2^{n-1}x$ and the other must be $2y$ (this follows from the fact that $m$ is odd and $(m-1,m+1) = 2$). I am stuck at this point.
I would appreciate a hint in either direction. I am hoping that 1) will work out. It will teach me a new proofing (err... proving) technique.
Thanks!
AI: Hint: Consider 2 cases. If $n$ is odd, take $\pmod{3}$. If $n$ is even, bound between 2 squares. |
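While not a substitute for the two-case proof, a brute-force scan over small $n$ is reassuring (exact integer arithmetic throughout):

```python
from math import isqrt

for n in range(1, 501):
    v = (1 << (3 * n)) + (1 << n) + 1   # 2^{3n} + 2^n + 1
    s = isqrt(v)
    assert s * s != v                    # never a perfect square in this range

print("no perfect squares for n = 1..500")
```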
H: What is $(\operatorname{monad}(0), \leq)$ isomorphic to?
Suppose we are given $\epsilon \in \operatorname{monad}(0)$ with $\epsilon \neq 0$. Is it true that for each $x \in \operatorname{monad}(0)$, $$x = \sum_{r_i \in s \subset \Bbb R,\ s \text{ is finite}}a_i \epsilon^{r_i},$$
where each $a_i \in \Bbb{R}$ is a constant coefficient?
Let $S$ be the set of all $f \in \Bbb {R^{\Bbb R}}$ such that $f(x)= 0$ for all but finitely many $x \in \Bbb R$.
If so, it seems to me $(\operatorname{monad}(0), \leq)$ is isomorphic to $S$ with a lexicographic order.
AI: Suppose that $\epsilon$ is a positive infinitesimal. In the ultrapower construction let $\langle x_k:k\in\omega\rangle$ represent $\epsilon$; I’ll denote this relationship by $\epsilon=\langle x_k:k\in\omega\rangle_{\mathscr{U}}$. Without loss of generality we may assume that $x_k>0$ for all $k\in\omega$. Then $\epsilon^n=\langle x_k^n:k\in\omega\rangle_{\mathscr{U}}$ for each $n\in\omega$. For $n\in\omega$ let
$$y_n=\frac12\min\left\{x_k^n:k\le n\right\}\;;$$
then for each $n\in\omega$ we have $y_k<x_k^n$ for all $k\ge n$, so if $\delta=\langle y_k:k\in\omega\rangle_{\mathscr{U}}$, then $\delta<\epsilon^n$ for all $n\in\omega$. Thus, $\delta$ is smaller than any finite real combination of real powers of $\epsilon$: it’s infinitesimal compared with $\epsilon$. |
H: Demonstration using the Pigeonhole principle
I was thinking about the following problem:
Let $n\in\mathbb N$ be odd. If I have a symmetric matrix in $M_n(\mathbb{N})$, i.e. a square symmetric matrix of size $n$, for which each column and each row consists of all numbers between $1$ and $n$, then the diagonal consists of all the numbers from $1$ to $n$.
I want to prove this using the pigeonhole principle. Thanks for helping me.
AI: You don't use the pigeonhole principle, but the double counting principle.
Note that since the matrix is symmetric, if an entry appears in the strictly upper triangle, it must appear again in the strictly lower triangle; hence each number occurs an even number of times off the diagonal.
Each number appears exactly once in each of the $n$ rows, so it appears $n$ times in total, which is odd. The only way for a number to appear an odd number of times is for it to appear an odd number of times on the diagonal. Hence each number must appear at least once on the diagonal, and since the diagonal has only $n$ entries, each of the $n$ numbers appears there exactly once.
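For a concrete family of examples, the addition table of $\mathbb Z_n$ (with symbols $0,\dots,n-1$ instead of $1,\dots,n$) is a symmetric matrix whose rows and columns are permutations; a small Python sketch checks that for odd $n$ its diagonal is a permutation too:

```python
n = 7  # any odd order
L = [[(i + j) % n for j in range(n)] for i in range(n)]

# Symmetric, and every row and column is a permutation of {0, ..., n-1}
assert all(L[i][j] == L[j][i] for i in range(n) for j in range(n))
assert all(sorted(row) == list(range(n)) for row in L)

# The diagonal entries 2i mod n hit every symbol exactly once when n is odd
print(sorted(L[i][i] for i in range(n)))  # [0, 1, 2, 3, 4, 5, 6]
```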
H: Integrating $\int{\frac{1}{1+e^{x}}}dx$, Partial Fractions(?)
I need help with this integral:
$$H(x) = \int{\frac{1}{1+e^{x}}}dx$$
It should be easy, but I'm stuck. I thought about using a u-substitution but I didn't get any further. Am I meant to use partial fractions? I'm not yet very comfortable with partial fractions. I'd be thankful for someone's explanation!
AI: Let $e^x = t$. We then have
$$\int \dfrac{dx}{1+e^x} = \int \dfrac{e^xdx}{e^x+e^{2x}} = \int \dfrac{dt}{t+t^2}$$
Equivalently, let $e^{-x} = t$, we then have
$$\int \dfrac{dx}{1+e^x} = \int \dfrac{e^{-x}dx}{1+e^{-x}} = \int \dfrac{-dt}{1+t}$$
I trust you can take it forward using both/either of the above substitutions. |
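For reference, a one-line sympy check of the antiderivative (any correct form may differ from yours by a constant):

```python
import sympy as sp

x = sp.symbols('x')
F = sp.integrate(1 / (1 + sp.exp(x)), x)
print(F)                                                   # one valid antiderivative, e.g. x - log(e^x + 1)
print(sp.simplify(sp.diff(F, x) - 1 / (1 + sp.exp(x))))    # 0
```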
H: Why does $7^{2\ln x}\cdot \ln(7) \cdot (2/x)$ equal $7^{2\ln x}\cdot \ln(49) /x$?
While reviewing, I came upon this problem which has the derivative
$7^{2\ln x}\cdot \ln(7) \cdot (2/x)$
simplified to
$7^{2\ln x}\cdot \ln(49) /x$
How/why is it simplified like that?
AI: The simplification relied on the following fact:
If $a>0$, then $2\log(a) = \log(a^2)$.
In your case, the $2 \log(7)$ got converted to $\log(7^2) = \log(49)$. The rest of the terms remain as such. |
H: Multiples of one number in base-$10$
How can I prove that every natural number has a multiple whose base-10 representation is written using only zeros and ones?
For example, let $n=3$; then there exists at least one such number, namely $111$, which is written using only zeros and ones (in this case just ones). Or let $n=5$; in this case there exists $10$, which is a multiple of $5$ and is written using only zeros and ones.
I must use the Pigeonhole principle to prove this fact.
AI: Given a number $n$, recall that a number divided by $n$ can have $n$ possible remainders.
Now consider the $n$ numbers
$$1; 11; 111; \cdots; \underbrace{11\ldots1}_{n\,\, 1\text{'s}}$$
If one of the above leaves a remainder $0$, we are done. If none of them leave a remainder $0$, by pigeonhole principle, since we have $n$ numbers and $n-1$ remainders (recall none of them have a remainder zero), two of them leave the same remainder. Subtract the smaller from the larger to get what you want.
As an example, let us see for $n=6$. Consider the numbers,
$$1; 11; 111; 1111; 11111; 111111$$
Note that none of them leave a remainder $0$.
$1$ leaves a remainder $1$.
$11$ leaves a remainder $5$.
$111$ leaves a remainder $3$.
$1111$ leaves a remainder $1$.
$11111$ leaves a remainder $5$.
$111111$ leaves a remainder $3$.
Since $1$ and $1111$ leave the same remainder, subtract $1$ from $1111$ to get $1110$ and clearly $6 \vert 1110$. |
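The argument is constructive, so it translates directly into a short Python sketch that finds such a multiple by tracking repunit remainders (the helper name is just illustrative):

```python
def zero_one_multiple(n):
    """A multiple of n written with only the digits 0 and 1, found via
    the pigeonhole argument on the repunits 1, 11, 111, ..."""
    seen = {}
    rep = 0
    for _ in range(n):
        rep = rep * 10 + 1        # next repunit
        r = rep % n
        if r == 0:
            return rep
        if r in seen:
            return rep - seen[r]  # a number of the form 1...10...0
        seen[r] = rep
    # unreachable: pigeonhole guarantees a hit within n repunits

for n in (3, 5, 6, 7):
    m = zero_one_multiple(n)
    print(n, m, m % n)  # e.g. 3 -> 111, 5 -> 10, 6 -> 1110, 7 -> 111111
```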
H: Counterexample to upper continuity
Let $M$ be a $\sigma$-algebra of subsets of a set $X$ and let $\mu:M\rightarrow[0,\infty)$ be a finitely additive set function. I'm trying to decide if it's automatically true that for all ascending chains $\{A_k\}$ in $M$:
$$\mu\big(\bigcup_{k=1}^{\infty}A_k\big)=\lim_{k\rightarrow\infty}\mu(A_k)$$
I mean we clearly have $\mu\big(\bigcup_{k=1}^{n}A_k\big)=\mu(A_n)$. These are two identical sequences of real numbers, how can their limits be different? Thus I can see no way for the above not to be true, it doesn't even require finite additivity. But the above statement along with finite additivity implies that $\mu$ is a measure, which would then force one to conclude that finite additivity implies countable additivity, which I don't think is supposed to be true; although the only counterexample to this I could find was one which involved ultrafilters, and don't be bringin' those ultrafilters round here boy, or we're gonna have strong words.
AI: Certainly $\mu(\cup_{k=1}^n A_k) = \mu(A_n)$ because $\cup_{k=1}^n A_k = A_n$. This is true for any set function $\mu$.
This question features a countably additive set-function into $[0, \infty]$ which is not upper continuous. Consider the measure space $(\mathbb{N}, 2^\mathbb{N},\mu)$ where $\mu$ is the counting measure. Let $A_n = \{k \in \mathbb{N}: k \geq n\}$. Then $\bigcap_{1}^\infty A_n = \emptyset$ so that $\mu(\bigcap_1^\infty A_n) = 0$ while $\mu(A_n) = \infty$ for any $n$. |
H: What is the relation between graded modules and finitely generated modules
The reason I ask this question is that I found two different statements of Hilbert's syzygy theorem, from Jacobson's Basic Algebra (2nd edition) and Wikipedia. Please have a look at the following pictures. The first one is from the former and the second one is from the latter.
In these two statements the modules required are different: one is called graded and one is called finitely generated, but the results are the same. So I want to ask: are these two kinds of modules equivalent, or is there some connection between them? Someone has told me that there are two versions of this theorem. If this is the case, then which one is more general or widely used?
At the same time, I found the Quillen-Suslin theorem on Wikipedia, which states that
every finitely generated projective module over a polynomial ring is free. And from Jacobson's book mentioned above there is a corollary
Here I read the corollary as: any projective graded $R$-module for $R=F[x_1,x_2,\ldots,x_m]$ is free. I don't know whether I understand it correctly or not. Are these two statements about projective modules being free the same?
Thank you for your assistance.
AI: The two theorems are not the same. One is about graded modules and the other is about modules.
The two results are related, though. As you say, every projective graded module (that is, every projective object of the category of graded modules and homogeneous maps of degree zero) is free as a module, and the connection between the two theorems stems from that.
The second theorem is true even if you remove the hypothesis that the module $M$ be finitely generated.
Finally, that corollary of Quillen-Suslin should be understood in the sense that every projective module is graded free for some grading which is to be found. |
H: If $\lim_{t\to\infty}\gamma(t)=p$, then $p$ is a singularity of $\gamma$.
I'm trying to solve this question:
Let $X$ be a vector field of class $C^1$ in an open set
$\Delta\subset \mathbb R^n$. Prove if $\gamma(t)$ is a trajectory of
$X$ defined in a maximal interval $(\omega_-,\omega_+)$ with
$\lim_{t\to\omega_+}\gamma(t)=p\in \Delta$, then $\omega_+=\infty$ and
$p$ is a singularity of $X$.
The first part is easy because $\gamma$ is contained in a compact set for large $t$; my problem is with the second part. I need help.
Thanks in advance
AI: For $n \geq 0$, let $t_n \in (n,n+1)$ such that $\gamma'(t_n)=\gamma(n+1)-\gamma(n)$ (use mean value theorem).
Then $\gamma'(t_n)=X(\gamma(t_n)) \underset{n \to + \infty}{\longrightarrow} X(p)$ and $\gamma'(t_n) \underset{n \to + \infty}{\longrightarrow} p-p=0$. Thus $X(p)=0$. |
H: How to prove this: $ {\left(\sqrt{a^2+b^4}-a\right)\left(\sqrt{b^2+a^4}-b\right)\le a^2b^2}$
Let $a,b\in \mathbb R$ be such that
$$\left(\sqrt{a^2+b^4}-a\right)\left(\sqrt{b^2+a^4}-b\right)\le a^2b^2$$
prove that $$a+b\ge 0$$
I think this is a very beautiful problem. Do you have nice methods? Thank you.
I have seen this problem:
$$\left(x+\sqrt{x^2+1}\right)\left(y+\sqrt{y^2+1}\right)=1$$
then we have $$x+y=0$$
This problem have some nice methods,
let $f(x)=\ln\left({x+\sqrt{x^2+1}}\right)$; then $f(x)$ is increasing and is an odd function, and
$$\ln\left({y+\sqrt{y^2+1}}\right)=-\ln\left({-y+\sqrt{y^2+1}}\right)=-f(-y)$$
so
$$f(x)-f(-y)=0\Longrightarrow f(x)=f(-y)$$
then $x+y=0$
and I have seen this problem:
$$\left(x+\sqrt{y^2+1}\right)\left(y+\sqrt{x^2+1}\right)=1$$
then we have $$x+y=0$$
solution: let $$x=\dfrac{1}{2}\left(u-\dfrac{1}{u}\right),y=\dfrac{1}{2}\left(v-\dfrac{1}{v}\right)$$
then we have
$$\dfrac{(uv-1)((u+v)^2uv+(u-v)^2)}{u^2v^2}=0$$
since $$(u+v)^2uv+(u-v)^2\ge 0$$
so $$uv=1$$
then $$x+y=\dfrac{1}{2}\left(u+v-\dfrac{1}{u}-\dfrac{1}{v}\right)=0$$
AI: Step 1: Multiplying by the conjugates, we get
$$ b^4 a^4 \leq a^2 b^2 ( \sqrt{a^2 + b^4} +a ) ( \sqrt{ b^2 + a^4} + b), $$
so (for $ab \neq 0$; the case where $a$ or $b$ is zero is easily checked directly)
$$ a^2 b^2 \leq ( \sqrt{a^2 + b^4} +a ) ( \sqrt{ b^2 + a^4} + b). $$
This gives us the chain of inequalities
$$ ( \sqrt{a^2 + b^4} -a ) ( \sqrt{ b^2 + a^4} - b) \leq a^2 b^2 \leq ( \sqrt{a^2 + b^4} +a ) ( \sqrt{ b^2 + a^4} + b)$$
Taking the extreme ends, we get $0 \leq a \sqrt{b^2 + a^4} + b \sqrt{a^2 + b^4} $.
Step 2: Consider cases
If $a, b <0$, then the factors on the LHS are clearly greater than $b^2$ and $a^2$ respectively, so the product exceeds $a^2 b^2$ and the inequality cannot hold.
If $a,b \geq 0$, then clearly $a+b \geq 0.$
Hence, we may assume that $ a \leq 0 \leq b$, and we want to show that $ -a \leq b$. Now, because we are used to dealing with non-negative reals, let me replace $a$ with $-a$ (not necessary, but it simplifies considerations later).
Step 3: With this substitution, the inequality in step 1 gives us
$$ a\sqrt{b^2 + a^4} \leq b \sqrt{a^2 + b^4}$$
Since the LHS is non-negative, we may square it to obtain
$$a^2 (b^2 + a^4) \leq b^2 (a^2 + b^4) \Rightarrow a^6 \leq b^6 \Rightarrow a \leq b.$$
But this is what we want to show in Step 2, hence we are done. (remember we substituted $a$ for $-a$.)
Step 1 gives you another way to prove your equality case. Namely, we get that
$$ 0 = y \sqrt{x^2+1} + x \sqrt{y^2+1} $$
Hence, we have $ - y \sqrt{x^2 + 1} = x \sqrt{y^2+1} \Rightarrow y^2(x^2+1)=x^2(y^2+1) \Rightarrow y^2=x^2$. Then check that $y=x$ is not a valid solution (unless $y=x=0$), hence we must have $y=-x$.
This seems much more direct than your approach, and is motivated by considering conjugates.
The inequality in step 1 can also be obtained directly, by expanding and showing that
$$ 0 \leq \sqrt{a^2+b^4} \sqrt{b^2+a^4} + ab - a^2b^2 \leq a \sqrt{b^2 + a^4} + b \sqrt{a^2 + b^4}$$
However, this is not immediately obvious from the question. It's more of 'on hindsight'.
I'd be interested in seeing how the equality case can be obtained through direct expansion. (I don't see how to do this as yet.) |
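Random testing is no substitute for a proof, but a quick numerical scan (plain Python sketch) supports the implication:

```python
import random
from math import sqrt

random.seed(1)
for _ in range(200_000):
    a = random.uniform(-5, 5)
    b = random.uniform(-5, 5)
    lhs = (sqrt(a*a + b**4) - a) * (sqrt(b*b + a**4) - b)
    if lhs <= a*a * b*b:        # hypothesis of the problem
        assert a + b >= -1e-9   # conclusion (tiny tolerance for rounding)

print("no counterexample found")
```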
H: Using Spherical coordinates find the volume:
Inside the surfaces $z=x^2+y^2$ and $z=\sqrt{2-x^2-y^2}$
I integrated over the ranges:
$0 \leq \theta \leq 2\pi$
$ 0 \leq \phi \leq \frac{\pi}{2}$
$0 \leq r \leq \sqrt{2}$
I get $\frac{\pi}{2}(4\sqrt{2} -4).$
The answer is the same except for a $-\frac{7}{2}$ instead of the $-4$ at the end. Obviously I'm missing a $\frac12$, but it seems I cannot find it.
AI: When $z=x^2+y^2=r^2$ and $z=\sqrt{2-x^2-y^2}=\sqrt{2-r^2}$ intersect each other, you will have $$r=1$$ This means that $z=1$ and $\tan(\phi)=1$. So the range of $\phi$ for the portion bounded by the paraboloid will be $\pi/4\leq\phi\leq\pi/2$. While typing, I saw @Mhenni did it completely, so I am ending. :-)
H: Relationships of Eigenvalues in Algebraic Closure
Suppose that $k$ is a field, and $A \in M_n(k)$ is a matrix that becomes diagonalizable over $\overline{k}$, the algebraic closure of $k$. Let $\lambda_1, \ldots, \lambda_n$ denote the (not necessarily distinct) eigenvalues of $A$ in $\overline{k}$.
Must each $\lambda_i$ be in $k$?
Must $\sum_i \lambda_i$ be in $k$?
Must $\sum_i \lambda_i^2$ be in $k$?
Must $\sum_{i<j} \lambda_i\lambda_j$ be in $k$?
Must $\sum_{i<j} (\lambda_i - \lambda_j)$ be in $k$?
I think #1 is almost certainly false, otherwise the algebraic closure wouldn't be necessary. For #2, I believe that it is true, since the sum of the diagonal entries (all elements of $k$) is equal to the sum of the eigenvalues. I am not even sure how to start for the rest.
AI: Hint: the characteristic polynomial of a matrix, $\chi_A(x) := \det(xI-A)$, has as roots the eigenvalues of $A$. Can you express your sums in terms of this polynomial's coefficients?
(Think in terms of symmetric polynomials. If your sums are not symmetric, you should look for counter-examples, perhaps with $k=\mathbb{R}$.)
Further notes: If $\lambda_1,\ldots,\lambda_k$ are the eigenvalues of $A$, then
$$\chi_A(x)= (x-\lambda_1)\cdots (x-\lambda_k).$$
The constant coefficient is then $\pm \lambda_1 \cdots \lambda_k$, and the $x$-coefficient is $$\pm\bigg((\lambda_2 \cdots \lambda_k)+(\lambda_1\lambda_3 \cdots \lambda_k)+(\lambda_1\lambda_2 \lambda_4 \cdots \lambda_k) + \cdots + (\lambda_1 \lambda_2 \cdots \lambda_{k-1}\bigg)$$
(It's easiest to ignore the exact signs.) There are general formulas for these coefficients in terms of the roots of our polynomial, called the elementary symmetric polynomials. It is a fundamental result that any symmetric polynomial (that is, a polynomial that remains unchanged by any permutation of its inputs) can be written as a polynomial in the elementary symmetric polynomials.
For your problem, you'll want to use that the value of any symmetric polynomial lies in the field generated by the values of your elementary symmetric polynomials. But the elementary symmetric polynomials show up as the coefficients of $\chi_A$, so they lie in the base field. |
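To make this concrete, here is a small sympy example (the matrix is an arbitrary rational matrix with irrational eigenvalues): the individual eigenvalues lie outside $\mathbb Q$, while the symmetric combinations stay inside.

```python
import sympy as sp

# Rational matrix whose eigenvalues are sqrt(2), -sqrt(2), 3
A = sp.Matrix([[0, 2, 0],
               [1, 0, 0],
               [0, 0, 3]])

lams = list(A.eigenvals().keys())
print(lams)                                  # contains sqrt(2) and -sqrt(2): not in Q
print(sp.simplify(sum(lams)))                # 3  (= trace of A)
print(sp.simplify(sum(l**2 for l in lams)))  # 13 (= trace of A^2)
e2 = sum(lams[i]*lams[j] for i in range(3) for j in range(i+1, 3))
print(sp.simplify(e2))                       # -2 (a coefficient of the characteristic polynomial)
```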
H: $H_{I}^{n}(M)\cong H_{I}^{n}(R)\otimes_R M.$
Let $R$ be a Noetherian ring and $I$ an ideal of $R$. If $n$ is the cohomological dimension of $I$, then why is the following isomorphism true:
$$H_{I}^{n}(M)\cong H_{I}^{n}(R)\otimes_R M.$$
The cohomological dimension of $I$ is defined to be the supremum of the set of integers $i$ such that $H_{I}^{i}(M)\neq 0$ for some $R$-module $M$.
AI: I will work with total derived functors, I hope it is OK.
So, let us look at the complex $R\Gamma_I (R)$. Note that because we can compute it using the Čech complex, one always has (over noetherian rings):
$R\Gamma_I(M) \cong R\Gamma_I(R) \otimes_R M$ for any $M$.
Now, let us calculate the $n$th cohomology of both sides:
LHS: $H^n(R\Gamma_I(M)) = H^n_I(M)$ - simply by definition.
RHS: $H^n(R\Gamma_I(R) \otimes_R M) = H^{(n+0)}(R\Gamma_I R \otimes_R M) \cong H^n(R\Gamma_I R) \otimes_R H^0(M)$ $=$ $H^n_I(R) \otimes_R M$. This follows from the Künneth spectral sequence since for $i>n$ we have $H^i_I(R) = 0$, and for $j>0$ we have $H^j(M)=0$, because we consider $M$ as a complex concentrated in degree $0$.
Comparing both, we get the required isomorphism:
$H^n_I(M) \cong H^n_I(R) \otimes_R M$. |
H: Values of a parameter $x$ in an infinite series that make it converge
I am required to find the values of $x$ in the following infinite series, which cause the series to converge.
$$\sum_{n=1}^\infty \frac{x^n}{\ln(n+1)}$$
I tried to use the ratio test, and found that the series converges when $x$ is in $(-1,1)$. However, this answer is not correct.
AI: You are largely right when you say that the series converges for all $x$ in the interval $(-1,1)$. However, the series also converges for $x=-1$, because then it is an alternating series.
The series diverges at $x=1$, and also whenever $|x|\gt 1$.
H: Helpful to review certain calculus topics before first real analysis course?
This is my first time posting, so I apologize in advance if my question is inappropriate here. I wanted to know if it would be beneficial for me to review certain calculus topics before I take my first real analysis course. I have noticed that some calculus topics are involved in real analysis courses (e.g. sequences, series, definition of limit). If this is true, then what calculus topics would be helpful to review? If this is not the case, what self-study methods would most likely benefit me before I journey into proof-based math?
EDIT: I am completely new to proof-based math, so I am looking for a couple of self-study suggestions that would likely prepare me well for the rigor of real analysis. By self-study suggestions, I mean reviewing certain calculus topics (if it would be helpful), and/or certain titles of books, etc.
AI: Certainly, a solid understanding of limits would help.
By this, I don't just mean that you can look at $\lim_{x\rightarrow 1}(x^2-1)/(x-1)$ and be able to compute the answer. I mean that you have a picture in mind, understand how the $\varepsilon\mbox{-}\delta$ language relates to that picture and (ideally) are able to write down the $\varepsilon\mbox{-}\delta$ proof.
Beyond that, maybe the way you think about calculus is more important than the topics (which are pretty standardized) that you know. For example, when you look at the equation $x^3-x-1=0$ do you think purely algebraically? Or do you think in terms of a picture? Can you tie that picture to theorems, such as the intermediate value theorem? |
H: Basic independent probability question
This question is a homework question.
The question states:
An airline can seat 100 people. Historically, the airline has noticed that each customer shows up independently and with probability $0.80$. The airline sells 101 tickets. What is the probability that there will be at least one unhappy customer?
My guess is that because this is about independent events, you add probabilities together?
So, suppose the airline could seat one person, and the customer had a 80% chance of showing. Well, the chance of the customer showing would be 80% right?
Now supposing the airline could seat 100 people, and each customer has an independently $0.80$ chance of showing, wouldn't the probability of all 100 seats being filled be $(0.8 * 100) = 80 $?
Since the airplane can only seat 100 people, but there are 101 tickets sold, there would be one unhappy customer if 101 people bought the ticket because the last person wouldn't have a seat. So I interpreted the question to imply: what is the probability of 101 customers showing up?
So my answer is $(0.8 * 101) = 80.8$
Is my reasoning correct, and if not, why not? Thanks.
Shoot .. aren't probabilities supposed to be $< 1$? Maybe it's $0.8^{100}$ or $0.8^{101}$ but now I'm just guessing. Our teacher left a hint saying don't be surprised if the probabilities are "very, very small", so it must be the $^$ operator right? I don't really know why though...
AI: Hint: Think of a coin purse that can only hold 1 coin. I have 2 fair coins, and flip them, putting the coins that come up heads into the purse. What's the probability the purse won't hold them all? Note that the purse won't hold them all $\iff$ both coins come up heads, and this is equal to
$$P(1^{\text{st}}\text{ coin = H})\times P(2^{\text{nd}}\text{ coin = H})=\left(\frac{1}{2}\right)\times\left(\frac{1}{2}\right)=\frac{1}{4}.$$
(Remember that the definition of two events $A$ and $B$ being independent is that $P(A\text{ and }B)$ is equal to $P(A)\times P(B)$; flipping each coin is independent, since the outcome of one coin can't affect the other.)
Can you generalize this simple example to your case? |
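If you carry the generalization through, someone is unhappy exactly when all $101$ independent ticket-holders show up, so the answer multiplies out just like the coins, and it matches your teacher's hint that the probability is very, very small:

```python
p_show = 0.8
n_tickets = 101

# At least one unhappy customer <=> every ticket-holder shows up
p_unhappy = p_show ** n_tickets
print(p_unhappy)  # ~1.6e-10
```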
H: $A^3 + A = 0$ then $rank (A) = 2$
Let $A$ be a $3\times 3$ non-zero real matrix satisfying $A^3 + A = 0$. Then prove that $\operatorname{rank}(A) = 2$.
As $A$ satisfies $A^3 + A = 0$, $0$ is an eigenvalue of $A$. So $\operatorname{rank} (A) < 3$, i.e. $\operatorname{rank} (A) = 0, 1,\text{ or }2$. Clearly $\operatorname{rank}(A) = 0$ is not possible as $A$ is a non-zero matrix. How do I show that $\operatorname{rank} (A) = 1$ is also not possible?
AI: Well, the other two possible eigenvalues are $i$ and $-i$, as they are roots of the polynomial
$$x^3+x=0\,.$$
Rank $1$ is possible, if you allow complex matrices. For example
$$ A:=\left(\begin{matrix}i&0&0\\0&0&0\\0&0&0\end{matrix}\right) $$
has rank $1$ and satisfies $A^3+A=0$. Otherwise, if you ask for real matrices, then the set of eigenvalues is closed under complex conjugation, i.e. the eigenvalues $i$ and $-i$ occur together, and therefore the rank must be $2$ or $0$.
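A concrete real example of the rank-$2$ case is a $90^\circ$ rotation block padded by a zero row and column; a quick numpy check:

```python
import numpy as np

A = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 0.]])   # A^2 = diag(-1, -1, 0), so A^3 = -A

print(np.allclose(A @ A @ A + A, np.zeros((3, 3))))  # True
print(np.linalg.matrix_rank(A))                      # 2
```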
H: What is the derivative of $\ln(4^x)$?
What is the derivative of $\ln(4^x)$ (which I believe is also equal to $x\ln4$)?
Is it $\dfrac{1}{x\ln4}$?
AI: Recall that for $a>0$, we have
$$\log(a^b) = b \log(a)$$
Also, note that
$$\dfrac{d}{dx}\left( cx\right) = c$$
I trust you can finish it from here. |
H: Finite Extensions and Roots of Unity
Two questions; the hint I've been provided is that they are, in fact, related.
Prove that a finite extension of $\mathbb{Q}$ contains finitely many roots of unity.
What is the largest (finite) order an element of $GL_{10}(\mathbb{Q})$ can have?
I am not sure how to approach the first problem. Does the second have something to do with the relationship between field extensions and vector spaces over a base field?
AI: Suppose $K$ is a number field of degree $d$ which contains infinitely many roots of unity. It then contains roots of unity of arbitrary high order (beecause there are finitely many of each order), and therefore it contains roots of cyclotomic polynomials $\Phi_n$ for infinitely many $n$. Let $S$ be the set of such $n$'s
Now the degree of $\Phi_n$ is $\phi(n)$ and $\phi(n)\geq\sqrt n$ for $n>6$. Since $S$ is infinite, there is an element $m\in S$ such that $\phi(m)>d$.
But then $K$ contains the field generated by a root of the polynomial $\Phi_m$, which is of degree $\phi(m)$. This is absurd.
For the second problem, notice that if a matrix in $GL_{10}(\mathbb Q)$ has finite order, it generates a subring of the ring of matrices $M_{10}(\mathbb Q)$ isomorphic to an extension of $\mathbb Q$ by a root of unity. This extension has degree at most 100, because it is contained in $M_{10}(\mathbb Q)$ (of course you can do much better in bounding this...) |
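Granting the standard fact that a finite-order element of $GL_{10}(\mathbb Q)$ is similar to a block sum of companion matrices of cyclotomic polynomials $\Phi_{n_1},\dots,\Phi_{n_k}$ (of sizes $\phi(n_1),\dots,\phi(n_k)$, padded with an identity block if needed), whose order is $\operatorname{lcm}(n_1,\dots,n_k)$, the question reduces to a small search: maximize the lcm subject to $\sum\phi(n_i)\le 10$. A brute-force sketch:

```python
from math import lcm

def phi(n):
    # Euler's totient by trial division
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def best(budget, start, cur):
    # DFS over increasing n: spend phi(n) of the dimension budget on a Phi_n block
    top = cur
    for n in range(start, 67):  # every n with phi(n) <= 10 lies below this bound
        c = phi(n)
        if c <= budget:
            top = max(top, best(budget - c, n + 1, lcm(cur, n)))
    return top

print(best(10, 2, 1))  # 120, attained e.g. by blocks for n = 3, 5, 8 (phi: 2 + 4 + 4 = 10)
```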
H: A limit of integrals
Let $f:[0,T]\to\mathbb{R}$ be a Lebesgue integrable function. For each $h>0$ we define the piecewise function $f_h$ by
$$f_h(t)=f(h\left[\frac{t}{h}\right])\quad\mbox{for}\quad t\in[0,T].$$
Can we affirm that
$$\lim_{h\to 0}\int_0^Tf_h(t)dt=\int_0^Tf(t)dt ?$$
N.B.: $\left[ x\right]$ is the integer part of the real number $x$.
AI: Let $T=1$ and let $f:[0,1]\to\mathbb R$ be the function which takes the value $0$ on rational numbers and $1$ on the irrational ones.
If $n\in\mathbb N$, then $f_{1/n}$ is the zero function. It follows easily from that that $\liminf\limits_{h\to0}\int_0^1f_h=0$ and therefore your equality cannot be true. |
H: Derivative of Linear Map
I'm reading Allan Pollack's Differential Topology and got stuck on this argument: In the second paragraph of page 9, section 1.2 he said
"Note that if $f:U\to \mathbf{R^m}$ is itself a linear map $L$, then $df_x=L$ for all $x\in U$. In particular, the derivative of the inclusion map of $U$ into $\mathbf{R^n}$ at any point $x\in U$ is the identity transformation of $\mathbf{R^n}$."
I don't quite understand his first sentence. How could $f=L$ and $L$ linear imply $df_x=L$? A counter example is $f(x)=x$ linear, then $f'(x)=1 \neq L$.
Thanks a lot for everyone's help!
AI: Formally, the situation is like this: there is a function $\phi:{\cal M}\to{\cal N}$ (between manifolds), and we want something to describe infinitesimal change. We associate to each $x\in{\cal M}$ a linear map, the differential, which is a transformation of tangent spaces $df_x:T_x{\cal M}\to T_{f(x)}{\cal N}$.
(Figure: the differential as a map between tangent spaces. Image source: Wikipedia.)
In Euclidean space, points themselves may be viewed as vectors and hence added or subtracted, and so the manifold and all of its tangent spaces may be identified together. That is, every tangent vector exists as a point in the original space (codomain). If $f:{\bf R}^n\to{\bf R}^m$ is differentiable, then the differential is the "directional derivative" as a linear function of the "direction." Explicitly, the matrix of this linear map $df_x$ is given by the Jacobian. The write-up says that the Jacobian matrix of the linear map $L:x\mapsto Ax$ is just $A$, hence $dL_x=L$ is an equality of maps for all $x$.
In the one-dimensional case, $f:{\bf R}\to{\bf R}$, the matrix of a linear map is $1\times1$, so essentially just a scalar value. The scalar is in fact $f'(x)$, so the differential is $df_x:v\mapsto f'(x)v$. In particular, $f(x)=x$ implies $f'(x)=1$ so $df_x:v\mapsto1v$ is the identity map, i.e. the same as $f$. |
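In computational terms, the point is simply that the Jacobian of $x\mapsto Ax$ is the constant matrix $A$; a tiny sympy sketch (the $2\times 2$ matrix is an arbitrary choice):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
A = sp.Matrix([[1, 2],
               [3, 4]])
L = A * sp.Matrix([x1, x2])

print(L.jacobian([x1, x2]))  # Matrix([[1, 2], [3, 4]]): the differential of L is A, at every x
```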