H: A short exact sequence of groups and their classifying spaces
Suppose that we have a short exact sequence of topological groups:
$$1 \to H \to G \to K \to 1.$$
I have found some papers mentioning that the above sequence induces a fibration:
$$BH \to BG \to BK.$$
Here $B$ assigns to each (topological) group its classifying space.
My question:
How can we show the fibration?
Are there any good books/papers explaining the categorical properties of the classifying space functor $B$?
Note:
I found a relevant question at MO. But I can not see that $EK \times_K (EG/H)$ has the same homotopy type as $BG$.
At the same page at MO, the paper "Cohomology of topological groups" (by Segal) is suggested. But it is not available for me. So I am looking for other papers.
AI: I think there's a typo in that MO answer: $BG$ is modeled by $EG \times_G E(G/H)$ since this is just $BG \times E(G/H)$ and $E(G/H)$ is contractible.
Now we have a natural map $EG \times_G E(G/H) \rightarrow B(G/H)$ which is a fibration with fiber $(EG)/H \cong BH$.
For a great reference on all this stuff written in a very friendly style see:
http://www.math.washington.edu/~mitchell/Notes/prin.pdf |
H: Showing unique decomposition into parallel and orthogonal parts for any subspace
Given a general (possibly infinite dimensional) linear vector space $V$ with an inner product, how can you prove that for any subspace $S$, any vector v in $V$ can be uniquely expressed as
$$v = s + t$$
where $s \in S$ and $t$ is orthogonal to $S$?
AI: Set $d=\inf_{s\in S} d(v,s)$ to be the distance from $v$ to $S$. We show that finding the nearest point to $v$ lying within $S$ gives the solution.
Lemma 1 There exists $s\in S$ such that $\lVert v-s\rVert = d$ if we additionally assume $S$ complete.
Lemma 2 $t=v-s$ is orthogonal to $S$.
Lemma 3 This decomposition is unique.
Proof of 1 Take $s_n\in S$ with $\lVert v-s_n\rVert^2\le d^2+\frac 1 n$. We show $s_n$ is Cauchy. Recall the parallelogram law $\lVert v_1+v_2\rVert^2+\lVert v_1-v_2\rVert^2=2(\lVert v_1\rVert^2+\lVert v_2\rVert^2)$. Set $v_1=v-s_n,v_2=v-s_m$. One finds $$\lVert s_m-s_n\rVert^2\le 2/m+2/n$$ and the result follows.
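Filling in that step (using $\lVert v-s_n\rVert^2\le d^2+\frac 1 n$ and the fact that $\frac{s_n+s_m}{2}\in S$, so $\lVert v-\frac{s_n+s_m}{2}\rVert\ge d$):
$$\lVert s_m-s_n\rVert^2 = 2\lVert v-s_n\rVert^2+2\lVert v-s_m\rVert^2-4\left\lVert v-\tfrac{s_n+s_m}{2}\right\rVert^2\le 2\left(d^2+\tfrac 1n\right)+2\left(d^2+\tfrac 1m\right)-4d^2=\tfrac 2n+\tfrac 2m.$$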
Proof of 2 Let $s'\in S$. $$\lVert v-(s+\lambda s')\rVert^2=\lVert v-s\rVert^2-2 \mathrm{Re}[\left<v-s,s'\right>]\lambda+\lVert s'\rVert^2 \lambda^2$$ but since $s$ minimized the length in question, considering small lambda the linear term must vanish. Hence $v-s$ is orthogonal to $S$.
Proof of 3 $v=s'+t'=s+t$ implies $(s'-s)+(t'-t)=0$. Taking inner products with each term shows both are zero.
As pointed out, for incomplete $S$ this fails. Just choose any situation like that of the finite sequences in $\ell^2$ where the natural 'orthogonal' terms form a series converging to a missing point in $S$. |
H: Diagonalizability in $\mathbb{R}$ and $\mathbb{C}$
Give an example of a matrix $A\in M_{n\times n}(\mathbb{R})$ that is not diagonalizable, but A is diagonalizable viewed as a matrix over the field of complex numbers $\mathbb{C}.$
AI: The characteristic polynomial $x^2+1$ of $\begin{pmatrix}0&-1\\1&0\end{pmatrix}$ is irreducible over $\mathbb R[x]$ but not over $\mathbb C[x].$ |
H: Conditional Probability Problem
An insurance company examines its pool of auto insurance customers and gathers the following information:
(i) All customers insure at least 1 car
(ii) 64% of all customers insure more than one car
(iii) 20% of the customers insure a sports car
(iv) Of those customers who insure more than one car, 15% insure a sports car.
What is the probability that a randomly selected customer insures exactly one car, and that car is not a sports car?
Let's use the following variable definitions:
O= owns 1 car, O' = owns more than 1 car
S= sports car, S' = Not sports car.
N() = Cardinal Number of a set
From statements (i)-(iii), we get the following: $N(O') = 64, N(O) = 36, N(S) = 20$
From statement (iv): $\Pr(S \mid O')=15$
We are asked to find $\Pr(S' \mid O)$
By definition:
$Pr(S' \mid O) = \cfrac{\Pr(S' \cap O)}{\Pr(O)}=\cfrac{\Pr(S' \cap O)}{N(O)}\tag{1}$
Pr()=N() since this is a uniform distribution--I interpret this when it says "randomly selected"
Next, I did:
$N(S)=N(S \cap O') + N(S \cap O)\tag{2}$
$0.2 = .64*.15 + .36 * x$
$x=0.28$, but we want 1-x because we want S' in $\Pr(S' \cap O)$ which equals 0.72.
See diagram below:
So plugging back into (1):
$Pr(S' \mid O) = \cfrac{0.71*.36}{.36}=.71$
but the answer is .26.
I mainly wanted to know why equation (2) is wrong. I know that is the crux of the problem. Why can't I use that equation in this case?
Any help is appreciated. Thank you.
AI: Your problem is very simple. The question asks for the probability $\mathrm P(S'\cap O)$ not $\mathrm P(S'|O)$.
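Just to check this against the stated answer, carry your own numbers through to the quantity that is actually asked for: $\mathrm P(S\cap O')=0.15\cdot 0.64=0.096$, so $\mathrm P(S\cap O)=0.20-0.096=0.104$, and hence $\mathrm P(S'\cap O)=\mathrm P(O)-\mathrm P(S\cap O)=0.36-0.104=0.256\approx 0.26$.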
Two things: I don't like the notation where probabilities are greater than 1. Divide them by a hundred or at least put percentage symbols. Also, your notation is rather convoluted. There's no need to define a counting function and a probability function when, as you know, both are identical here. |
H: Yitang Zhang: Prime Gaps
Has anybody read Yitang Zhang's paper on prime gaps? Wired reports "$70$ million" at most, but I was wondering if the number was actually more specific.
*EDIT*$^1$:
Are there any experts here who can explain the proof? Is the outline in the annals the preprint or the full accepted paper?
AI: 70 million is exactly what is mentioned in the abstract.
It is quite likely that this bound can be reduced; the author says so in the paper:
This result is, of course, not optimal. The condition $k_0 \ge 3.5 \times 10^6$ is also crude and there are certain ways to relax it. To replace the right side of (1.5) by a value as small as possible is an open problem that will not be discussed in this paper.
He seems to be holding his cards for the moment...
You can download a copy of the full accepted paper on the Annals page if your institution subscribes to the Annals. |
H: What functions are solution to a homogeneous system of differential equations?
Given a vector $\vec{u} \in \mathbb{R}^n$. For what functions $\psi(t)$ can $\vec{x}(t) = \psi(t)\vec{u}$ be a solution of $\dot{\vec{x}} = A \vec{x}$ for some $n \times n$ matrix $A$?
I'm trying to prove that $\psi(t)$ has to be of the form $e^{\lambda t}$ for some constant $\lambda$, but I'm not sure about that and I do not know exactly how to prove that.
AI: We have
$$\dot \psi u= A \psi u$$
Suppose that $\psi(t_0)\neq0$ for some $t_0$. Then by continuity, in some neighbourhood
$$ A u = \frac{\dot\psi}\psi u$$
(and hence $\dot \psi/\psi$ is an eigenvalue of $A$, assuming $u\neq0$.) But the left-hand side is constant; therefore so is $\dot\psi/\psi$, and the result follows.
H: Convergence of $\int_{-\infty}^\infty \frac{1}{1+x^6}dx$
Okay, so I am asked to verify the convergence or divergence of the following improper integrals:
$$\int_{-\infty}^\infty \frac{1}{1+x^6}dx$$
and
$$\int_1^\infty \frac{x}{1-e^x}dx$$
Now, my first attempt was to use comparison criterion with $$\int \frac{1}{x^2}$$
and conclude that both of the improper integrals converge given that they are smaller than the general term $\frac{1}{x^2}$.
Is it the right path? Also, are the antiderivatives of the improper integrals given easy to find?
Thanks.
AI: For the first one you can write:
$$\int_1^\infty \frac{dx}{1+x^6}\le \int_1^{\infty}\frac{dx}{1+x^2} = \frac{\pi}{2} - \frac{\pi}{4} =\frac{\pi}{4}$$
and
$$\int_0^{1} \frac{dx}{1+x^{6}}\le \int_0^{1} dx=1$$
This shows that
$$\int_0^{\infty}\frac{dx}{1+x^6}$$ exists and is finite. You can similarly show $$\int_{-\infty}^{0}\frac{dx}{1+x^6}$$
exists, and adding the two integrals shows the desired integral converges. |
H: Convergence of $\int_{0}^{1} \frac{\ln^{2} x}{x^{2}+x-2} \ dx $
How do you show that $\displaystyle \int_{0}^{1} \frac{\ln^{2}x}{x^2+x-2} \ dx $ converges?
The singularity at $x=1$ is not an issue since it is removable.
But what about at $x=0$?
AI: You can show that
$$\int_0^1 dx\ \ln^n x$$
converges by changing variables (it's a gamma function). Now, your denominator is benign in a neighborhood of $x=0$, and you can do a simple comparison to show that your integrand is smaller in magnitude than the integrand mentioned above. |
H: Matrix Equation, Solving for Variables.
I'm going through my exercises, and came across a problem that wasn't covered in our lectures. Here's the question:
$
\begin{align}
\begin{bmatrix}
a-b & b+c\\
3d+c & 2a-4d
\end{bmatrix}
\end{align}
=
$
$
\begin{align}
\begin{bmatrix}
8 & 1\\
7 & 6
\end{bmatrix}
\end{align}
$
What I have done so far is:
$
\begin{align}
\begin{bmatrix}
a-b-8 & b+c-1\\
3d+c-7 & 2a-4d-6
\end{bmatrix}
\end{align}
=
$
$
\begin{align}
\begin{bmatrix}
0 & 0\\
0 & 0
\end{bmatrix}
\end{align}
$
And the solving for variables,
$$
a-b-8 = 0
$$
$$
a-b = 8
$$
$$
a = \frac{8}{-b}
$$
$$
b+c-1=0
$$
$$
b+c=1
$$
$$
b=\frac{1}{c}
$$
$$
3d+c-7=0
$$
$$
3d+c=7
$$
$$
3d=\frac{7}{c}
$$
$$
d=\frac{7}{3c}
$$
$$
2a-4d-6=0
$$
$$
2a-4d=6
$$
$$
\frac{16}{-2b}-\frac{28}{12c}=6
$$
Am I going about this correctly? Or am I just doing this completely incorrect?
AI: Error 1: $a-b=8$ is not the same as $a = \frac{8}{-b}$ but $a = 8 + b$
Error 2: $b+c=1$ is not the same as $b = \frac{1}{c}$ but $b = 1-c$
Error 3: $3d+c=7$ ... same as errors 1 and 2
Error 4: $2a-4d=6$ Incorrect substitution because of the previous errors. Fix the previous errors and then substitute again to fix this one; a worked check is sketched below.
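If you want something to check your corrected work against (a sketch only, so do verify it): $a=8+b$ and $b=1-c$ give $a=9-c$, and $3d=7-c$ gives $d=\frac{7-c}{3}$. Substituting into $2a-4d=6$ yields $2(9-c)-\frac{4(7-c)}{3}=6$, i.e. $54-6c-28+4c=18$, so $c=4$, and then $b=-3$, $a=5$, $d=1$. |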
H: What is a "distinguished subset"?
I don't know if this is another word for something I already know or if it is something altogether different. I'm reading my textbook in CS about distributed algorithms, and this came up.
I googled for a definition but couldn't find one.
The state set $Q$ contains a distinguished subset of initial states.
AI: A distinguished subset is one that is chosen out of all the subsets. In this case, some of the states are chosen to be "initial states". |
H: Cardinality of intersection of sets
Consider the following problem: find $n(A \cap B)$ if $n(A)=10$, $n(B)=13$ and $n(A \cup B) = 15$.
I know if I want to find the union I use the Cardinal Number formula:
$$n(A\cup B) = n(A) + n(B) - n(A\cap B)$$
But how do I do it the other way: to find $n(A\cap B)$?
Would it be $n(A\cap B) = n(A) + n(B) - n(A\cup B)$ ?
AI: Yes... $n(A\cap B) = n(A) + n(B) - n(A\cup B)$ is the way to go.
$n(A\cap B) = 8$ according to this.
A little Venn to visualize the formula: |
H: What is the mathematical definition of index set?
I find some descriptions http://en.wikipedia.org/wiki/Index_set and http://mathworld.wolfram.com/IndexSet.html .
But can't find any definition.
AI: An index set is just the domain $I$ of some function $f:I\to X$. It's just a notational distinction between a function domain and an index set - when we think of it as an index set, we write $f_i$ rather than $f(i)$.
Both the Wikipedia and Wolfram links you provide indicate that the function $f$ should be $1-1$ and onto, but I don't actually think that is necessary. For example, if we have a sequence $a_1,\dots,a_n,\dots$ then the index set is $\mathbb N$ whether or not the $a_i$ are distinct. |
H: number of zeros of function $\prod_{n=1}^{\infty}\left(1-\frac{z^2}{n^2}\right)-1$
$$f(z)=\prod_{n=1}^{\infty}\left(1-\frac{z^2}{n^2}\right)-1$$
How many zeros does the above function have in $\Bbb{C}$?
AI: As noted in a comment, this is essentially the problem of determining the number of solutions to $\sin(z)=z$. There are infinitely many. It turns out more generally, as I learned from Aryabhata's answer at this link, with reference to an example in MarkuΕ‘eviΔ's book at this link, that for every complex number $A$, the equation $\sin(z)=Az$ has infinitely many solutions in the complex plane. |
H: Cardinality of Cartesian Product of finite sets.
If $a = \{1,2,3\}$ and $b = \{a,b,c\},\;$ FIND $\;n(a\times b)$
Or is it impossible to multiply these sets?
What will be the answer?
AI: Let's use capital letters for sets: so let $$A = \{1, 2, 3\},\;\;\text{ and} \;\; B = \{a, b, c\}$$ and $n(A) = |A| = 3,\;n(B) = |B| = 3$.
Then the Cartesian-product $\,A\times B\;$ is a set of all ordered pairs $$A \times B= \{(a_i, b_j)\mid a_i \in A, b_j \in B\}.$$
In this case, $$A \times B = \{(1, a), (1, b), (1, c), (2, a),(2, b), (2, c), (3, a), (3, b), (3, c)\}$$
In general, for two sets $P, Q$, $$\;|P\times Q| = |P| \times |Q|$$
So, if $n(A \times B) = |A\times B|,\;$ then $n(A \times B) = n(A) \times n(B) = 3 \times 3 = 9$ |
H: What subjects should I study to learn about eigenfunctions? What good textbooks would you recommend for learning the subject?
I googled eigenfunction and looked it up on Wikipedia, but I still do not know where I should start to learn the subject. I have two questions; allow me to repeat the title of this question.
What subjects should I study to learn about eigenfunctions?
What good textbooks would you recommend for learning the subject?
Thank you for any answers.
AI: This is not a very good answer to your question, but based on the article it looks like you might actually want to study partial differential equations. Eigenfunction expansion is a nice technique to help understand these equations, in exactly the same way eigenvectors let us understand linear transformations (some PDEs actually are just linear transformation, in a sense).
Since you want to understand PDE involved in population dynamics, I would suggest you look at a more applied book on PDEs. I unfortunately don't know any. A quick search for "Applied PDE Book" gave this as the number one on Amazon: http://www.amazon.com/Applied-Partial-Differential-Equations-Edition/dp/0130652431
This book appears to cover the topics you need to understand the paper, based on a very casual glance of both the paper and the contents of the book. I don't know the level of difficulty of the text, so be prepared to review or learn new things in linear algebra or analysis.
Many of the comments suggest learning functional analysis. I initially wanted to recommend this, but I'm afraid that if you go out and pick up a typical book on functional analysis, it will be too abstract and difficult to transfer to your paper. A sufficiently advanced PDE book should include the techniques from functional analysis you need. |
H: Independent Spaces
What does it mean for spaces to be independent?
AI: Subspaces $W_1,...,W_k$ of $V$ are said to be independent if the only combination $$w_1+\cdots+w_k=0$$ with each $w_j\in W_j$ is $w_1=\cdots=w_k=0$. (This definition should be a bit earlier on the page.) |
H: Does this algorithm terminate in finite time?
I am trying to determine whether the following algorithm terminates:
int n;
int s;
s=3n;
while s>0
{
if s is even
{
s=floor(n/4);
}
else
{
s=2s;
}
}
So far, I have tried to see whether I can come up with a pattern for how the algorithm behaves for arbitrary n. But I haven't had any luck so far. I'm pretty stumped right now. Any ideas or suggestions as to how I show that this algorithm terminates?
AI: Hint: for what $s$ will the loop terminate the next time through? You need $s=0$ at the end of the loop. How about trying it for $n=1$ through $8$? |
H: Find the modular residue of this product..
Please help me solve this and please tell me how to do it..
$12345234 \times 23123345 \pmod {31} = $?
edit: please show me how to do it on a calculator not a computer thanks:)
AI: We want to replace these big numbers by much smaller ones that have the same remainder on division by $31$.
Take your first big number $12345234$. Divide by $31$ with the calculator. My display reads $398233.3548$. Subtract $398233$. My calculator shows $0.354838$. Multiply by $31$. The calculator gives $10.999978$. If it were perfect, it would say $11$.
Do the same with the other big number. My calculator again says $10.99978$, and if perfect would say $11$.
Multiply $11$ by $11$, and find the remainder when $121$ is divided by $31$. Again, we could use the calculator. But it can be done in one's head. The remainder when we divide by $31$ is $28$.
Remark: It can be fairly important not to write down an intermediate calculator result, and rekey. The reason is that the calculator internally keeps guard digits, that is, it is calculating to greater precision than it displays. If you rekey, typing in only digits that you see, you will lose some of the built-in accuracy that you paid for. For similar reasons, it is useful to learn to make use of the memory features of your calculator.
Let's see why the procedure we used works. When we divide $12345234$ by $31$, the stuff before the decimal point is the quotient. The remainder is unfortunately not given directly, but what's "left over" is (approximately) $0.354838$. This is a decimal representation of $\frac{r}{31}$, where $r$ is the (integer) remainder. To recover $r$, we multiply $0.354838$ by $31$. Because of internal roundoff, we usually don't get an integer, but most of the time, if the divisor (here $31$) is not too large, we get an answer that is very close to an integer, so we can quickly decide what $r$ must be. |
H: Integration of $\displaystyle \int\frac{1}{1+x^8}\,dx$
Compute the indefinite integral
$$
\int\frac{1}{1+x^8}\,dx
$$
My Attempt:
First we will factor $1+x^8$
$$
\begin{align}
1+x^8 &= 1^2+(x^4)^2+2x^4-2x^4\\
&= (1+x^4)^2-(\sqrt{2}x^2)^2\\
&= (x^4+\sqrt{2}x^2+1)(x^4-\sqrt{2}x^2+1)
\end{align}
$$
Then we can rewrite the integral as
$$
\int\frac{1}{1+x^8}\,dx = \int \frac{1}{(x^4+\sqrt{2}x^2+1)(x^4-\sqrt{2}x^2+1)}\,dx$$
To use partial fractions let $t = x^2$ to get
$$
\frac {1}{(t^2+\sqrt{2}t+1)(t^2-\sqrt{2}t+1)} = \frac{At+B}{t^2+\sqrt{2}t+1}+\frac{Ct+D}{t^2-\sqrt{2}t+1}
$$
This method of solving the problem becomes very complex. Is there a less complex approach to the problem?
AI: Why not split it up into partial fractions until you have first-degree polynomials in the denominators?
$$\frac{1}{1+x^8}=\frac{A}{x-e^{i\pi/8}}+\frac{B}{x-e^{-i\pi/8}}+\frac{C}{x-e^{i3\pi/8}}+\frac{D}{x-e^{-i3\pi/8}}+\frac{E}{x-e^{i5\pi/8}}+\frac{F}{x-e^{-i5\pi/8}}+\frac{G}{x-e^{i7\pi/8}}+\frac{H}{x-e^{-i7\pi/8}}$$
or if you prefer without the complex numbers
$$\frac{1}{1+x^8}=\frac{ax+b}{x^2-2\cos(\pi/8)x+1}+\frac{cx+d}{x^2-2\cos(3\pi/8)x+1}+\frac{ex+f}{x^2-2\cos(5\pi/8)x+1}+\frac{gx+h}{x^2-2\cos(7\pi/8)x+1} \; .$$
With the complex formula, you can find the coefficients easily as follows
$$A=\lim_{x \to e^{i\pi/8}}\frac{x-e^{i\pi/8}}{1+x^8}\overset{\text{H}}{=}\lim_{x \to e^{i\pi/8}}\frac{1}{8x^7}=\frac{e^{-i7\pi/8}}{8}$$
where I used de l'Hôpital's rule.
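If you want to pass from the complex form back to the real one, note that the coefficients at conjugate poles are conjugate ($B=\bar A$, and so on), so the terms pair up; for instance, writing $\omega=e^{i\pi/8}$ (the other three quadratic factors work the same way):
$$\frac{A}{x-\omega}+\frac{\bar A}{x-\bar\omega}=\frac{2\operatorname{Re}(A)\,x-2\operatorname{Re}(A\bar\omega)}{x^2-2\cos(\pi/8)x+1}=\frac{-\frac{\cos(\pi/8)}{4}\,x+\frac14}{x^2-2\cos(\pi/8)x+1},$$
since $A=\frac{e^{-i7\pi/8}}{8}$ gives $2\operatorname{Re}(A)=-\frac{\cos(\pi/8)}{4}$ and $A\bar\omega=\frac{e^{-i\pi}}{8}=-\frac18$. |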
H: Mathematical Systems Question Help
Alright, this is another question for my math for teachers course. This question is not actually in the homework, but there are problems similar to it. I'd really like to learn how to do problems like this one, so it would be much appreciated if I was given a solid explanation/demonstration. I don't really have a clue where to start, so I've defined the terms as best as I can under each question... The question should be answered using the data from the table.
A mathematical system is defined by the following table:
Is there an identity element in this system? If so, what is it?
Identity is: aXe=a and eXa=a
Is closure satisfied by this system? Explain.
Closure is: aXb
Is this system commutative? Explain.
Commutative is: aXb=bXa
Does every element have an inverse? Explain.
Inverse is: aXx=e and xXa=e
Is this system a group? Explain.
Not sure what they mean by this...
Are they referring to another property?.
AI: Scanning along the rows, we see that the second row is in alphabetical order. Similarly, scanning along the columns, we see that the second column is in alphabetical order. Thus, $b$ must be the identity element.
Since all elements in the grid are either $a,b,$ or $c$, we know that the system is closed under multiplication.
Since the grid is symmetrical with respect to the main diagonal (draw a line going from the top left to the bottom right), the system is commutative.
Since the identity element $b$ appears at least once in every row and column, every element has an inverse (note that inverses aren't necessarily unique, suggesting that it isn't a group).
A group is a set $G$ with a multiplication operation $\times$ that satisfies 4 properties: identity, closure, inverses, and associativity. Since the first 3 conditions are satisfied, it suffices to consider the last condition:
Definition: An operation $\times$ is considered associative iff for all $x,y,z\in G$: $$ (x \times y) \times z = x \times (y \times z)$$
Observe that this system is not associative (and thus not a group), since:
$$
(a \times a) \times c=b \times c = c\\
a \times (a \times c)=a \times b = a
$$
Yet $c \ne a$. |
H: Alternating functional Series Convergence SOS....
Does the following series converge?
$\sum_{k=0}^\infty \frac{(-1)^k x^k \sqrt{k}}{k!}$
what is the radius of convergence?!!
AI: Note that $\sqrt{k}\le k$ for $k\ge 1.$ Then
$\left| \frac{(-1)^k x^k \sqrt{k}}{k!} \right|\le\frac{|x|^k k}{k!}=\frac{|x|^k}{(k-1)!}$ for $k\ge 1.$
For the series $\sum\limits_{k=1}^\infty {\frac{|x|^k}{(k-1)!}}$ the radius of convergence $R=\lim\limits_{k\to\infty}\frac{k!}{(k-1)!}=\ldots$ |
H: Basic probability questions
How is P(A , B) different from P(A $\cap$ B)? I'm genuinely curious as to why one might prefer one over the other.
Also as far as some probability P (A | B , C) goes, what is the order of operations i.e., is it P (A | (B , C)) or P ((A | B) , C)? Or are they both similar?
AI: The expression $\Pr(A|B,C)$ is an abbreviation for $\Pr(A|(B\cap C))$. Since $B\cap C=C\cap B$, there is no precedence issue.
Sometimes $\Pr(A,B)$ is used as an abbreviation for $\Pr(A\cap B)$, but that is an uncommon practice. By way of contrast, one bumps often into the conditional probability version mentioned in the first paragraph. |
H: Check whether the following polynomial is irreducible over $\mathbf Q$
I was trying this problem from my Abstract Algebra book exercise that says:
Show that the polynomial $x^2+\frac 13x-\frac 25$ is irreducible in $\mathbf Q[x]$.
What I tried: $x^2+\frac 13x-\frac 25 \equiv 15x^2+5x-6=f(x)$, say.
Now I compute $f(x+1)=15x^2+35x+14$. Now, $ 7 \mid 35, 7 \mid 14; 7\nmid 15,7^2 \nmid 14$. Hence, using Eisenstein's criterion, $f(x+1)$ is irreducible over $\mathbf Q$. Hence $f(x)$ is irreducible over $\mathbf Q$.
Am I right?
I want to add another problem similar to the above :
Show that the polynomial $2x^3-x^2+4x-2$ is not irreducible in $\mathbf Z[x]$.
Here $f(x+1)=2x^3+5x^2+8x+3$, where $f(x)=2x^3-x^2+4x-2$. Here I cannot apply Eisenstein's criterion to $f(x+1)$. I also cannot factorize $f(x)$. Then how can I prove it?
Can someone point me in the right direction? Thanks in advance for your time.
AI: Your answer is correct.
Post question's edit:
You want to prove that the polynomial is reducible in $\Bbb Z[x]$. By (the second) Gauss's lemma, if it is reducible in $\Bbb Q[x]$, then it also is in $\Bbb Z[x]$. Since its degree is $3$, it is reducible (over $\Bbb Q$) if, and only if, there is a rational root. Now note that in case Eisenstein's criterion doesn't work, you still won't know whether it is reducible or not, so that shouldn't be the way to go. You want to find a rational root. Try finding roots using the rational root theorem.
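For instance, carrying that suggestion out: the rational root theorem limits the candidates to $\pm1,\pm2,\pm\frac12$, and testing $x=\frac12$ gives $2\cdot\frac18-\frac14+2-2=0$. So $x=\frac12$ is a root and
$$2x^3-x^2+4x-2=(2x-1)(x^2+2),$$
which exhibits the polynomial as reducible in $\Bbb Z[x]$.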
Something else you can try is to factor your polynomial directly. If you can't do this by hand, you can always give the polynomial as input to a well-known on-line computer algebra system. |
H: Two problems on analytic function and Mapping of elementary functions
Let $G$ be a region and let $f$ and $g$ be analytic functions on $G$ such that $f(z)g(z)=0$ for all $z \in G$. Show that either $f$ or $g$ is identically zero on $G$.
Here is how I do it: Assume $f$ is non zero on $B(a,R)$, then $fg=0$ implies $a$ is a root of $g$ (of order $n>0$). Therefore $g(z)=(z-a)^nF(z)$ for some non zero analytic function $F(z)$. And then I can't move any more...
The second question is to determine the image of $\{z=x+iy : -\pi < x < \pi,
y=3\}$ under the mapping $u+iv = w = \sin z$.
AI: This is a consequence of the identity theorem: Assume $f \not= 0$, then $$N := \{z \in G; f(z) = 0 \}$$ has no accumulation point (otherwise, the identity theorem would imply $f=0$ since $G$ is a region). Clearly, $g|_{G \setminus N} = 0$ since $f \cdot g=0$. Since $G \setminus N$ has an accumulation point, the identity theorem states $g=0$.
By definition, $$\sin z = \frac{1}{2\imath} (e^{\imath \, z} - e^{-\imath \, z})$$ Now set $z=x+3\imath$, $x \in (-\pi,\pi)$, to calculate the image of the given set. |
H: Find $u\in\mathbb{R}$ such that $\mathbb{Q}(u) = \mathbb{Q}(2^{1/2}, 5^{1/3})$.
I am having trouble finding such a $u$. My instincts at first told me to do the obvious thing and let $u = 2^{1/2}5^{1/3}$ but $u^{2} = \left(2^{1/2}5^{1/3}\right)^{2} = 2\cdot5^{2/3}$ but we want $\frac{1}{2}u^{2} = 5^{1/3}$ right?? Any hints on how to approach the exercise?
Then the next part ask to describe how you would find all $w\in\mathbb{Q}(2^{1/2}, 5^{1/3})$ such that $\mathbb{Q}(w) = \mathbb{Q}(2^{1/2}, 5^{1/3})$ I have no approach on this one except to possible look at linear combinations of the basis terms but dont know how to go about it.
AI: Hint: Use the proof of the primitive element theorem. |
H: Prove that $\sin^2{x}+\sin{x^2}$ isn't periodic by using uniform continuity
Before the problem there is a proof that a continuous periodic function whose domain is $\mathbb{R}$ is uniformly continuous. So the problem is really to prove that $\sin^2{x}+\sin{x^2}$ is not uniformly continuous. I hope to follow the intended approach. Thanks for the help!
AI: We proceed from the definition of uniform continuity. (One can get the result more simply by considering the derivative of $\sin(x^2)$.)
It is enough to show that $\sin(x^2)$ is not uniformly continuous. For if $\sin^2 x+\sin(x^2)$ were uniformly continuous, then since $\sin^2 x$ is uniformly continuous, the difference $\sin^2 x+\sin(x^2)-\sin^2 x$ would be uniformly continuous.
Let $\epsilon=\frac{1}{2}$. We show that there does not exist a $\delta\gt 0$ such that for all $x$, if $|x-y|\lt \delta$, then $|\sin(x^2)-\sin(y^2)|\lt \epsilon$.
Let $\delta$ be a fixed positive quantity. Let $x=\sqrt{2n\pi}$, where $n$ is a large integer to be chosen later. Then $\sin(x^2)=0$. Let $n$ be such that $2\delta\sqrt{2n\pi}\gt \frac{\pi}{2}$. Note that
$$(x+\delta)^2=x^2+2x\delta+\delta^2\gt x^2+2x\delta\gt 2n\pi +2\delta\sqrt{2n\pi}\gt 2n\pi+\frac{\pi}{2}.$$
It follows that there is a $y$ such that $x\lt y\lt x+\delta$ and $\sin(y^2)=1$. In particular, $|\sin(x^2)-\sin(y^2)|\gt \frac{1}{2}$. This completes the proof. |
H: find the condition on A for the summation to be convergent
The summation is:
$$\sum_{n=1}^\infty \frac{ \sqrt { n + 1 } - \sqrt n }{n^A}$$
I don't know how to even begin. Hints??
AI: Hint: A standard beginning is to multiply top and bottom by $\sqrt{n+1}+\sqrt{n}$.
We end up with
$$\sum_{n=1}^\infty \frac{1}{n^A(\sqrt{n+1}+\sqrt{n})}.$$
Now note that $\sqrt{n}\lt \sqrt{n+1}\le 2\sqrt{n}$. From this we obtain
$$ \frac{1}{3n^{A+1/2}} \lt \frac{1}{n^A(\sqrt{n+1}+\sqrt{n})}\lt \frac{1}{2n^{A+1/2}}.$$
These inequalities should be enough to do the appropriate comparisons with familiar series. |
H: Prove that this set (involving fractional part of any rational number) is a partition of the set of rationals.
For any rational number $x$, we can write $x=q+\,n/m$ where $q$ is an integer and $0\le n/m<1$. Call $n/m$ the fractional part of $x$. For each rational $r\in \{x : 0\le x<1\}$, let $A_r = \{ x\in \Bbb Q: \text{the fractional part of $x$ is equal to}\; r\}$.
Prove :$\{A_r : 0\le r<1\}$ is partition of $\Bbb Q$
AI: HINT: To prove that $\{A_r:r\in\Bbb Q\cap[0,1)\}$ is a partition of $\Bbb Q$, you must show that each $A_r$ is a non-empty set of rational numbers, and that every rational number belongs to exactly one $A_r$. It's clear from the definition that each $A_r\subseteq\Bbb Q$, and you should have no trouble exhibiting a particular rational number that belongs to $A_r$. That leaves you only two things to show:
each rational number belongs to some $A_r$ with $r\in\Bbb Q\cap[0,1)$, and
if $r,s\in\Bbb Q\cap[0,1)$, and $r\ne s$, then $A_r\cap A_s=\varnothing$.
The first sentence of your question gives the justification for (1), so really all that's left is to prove (2). I suggest proving the contrapositive: if $x\in A_r\cap A_s$, then $r=s$. |
H: Show reflexive normed vector space is a Banach space
$X$ is a normed vector space. Assume $X$ is reflexive, then $X$ must be a Banach space.
I guess we only need to show any Cauchy sequence is convergent in $X$.
AI: Hint: (1) If $X$ is reflexive, $X$ is isomorphic to $X^{**}$. (2) Dual spaces are always complete.
Regarding (2), we will prove that $L(X,Y)$, the space of bounded linear operators from $X$ to $Y$, is complete in the operator norm if $Y$ is complete. Then (2) follows, as $X^* = L(X, \mathbb K)$ and $\mathbb K$ is complete. So let $(T_n)$ be an operator norm Cauchy sequence; then $(T_n x)$ is Cauchy for each $x$, as $\def\norm#1{\left\|#1\right\|}$
$$ \norm{T_nx-T_mx} \le \norm{T_n - T_m}\norm x $$
As $Y$ is complete, we may define $T\colon X \to Y$ by $Tx := \lim_n T_n x$. $T$ is linear, as the $T_n$ and the limit are, and bounded since
$$ \norm{Tx} \le \sup_n\norm{T_n x} \le \sup_n\norm{T_n}\cdot \norm x $$
and Cauchy sequences are bounded. Now given $\epsilon > 0$, we can find a $N$, such that
$$ \norm{T_n - T_m} < \epsilon, \text{ all $n,m \ge N$} $$
giving
$$ \norm{T_n x - T_m x} < \epsilon, \text{ all $\norm x \le 1$, $n,m \ge N$} $$
for $m \to \infty$
$$ \norm{T_n x - T x} \le \epsilon, \text{ all $\norm x \le 1$, $n\ge N$} $$
that is $\norm{T_n - T} \le \epsilon$, $n \ge N$. So $T_n \to T$. |
H: Infinite imprimitive non abelian group?
My new question is
Is there an infinite, imprimitive and non abelian group?
Thank you for the further answers.
AI: Consider the subgroups $A={\rm Sym}(2{\bf Z})$ and $B={\rm Sym}(1+2{\bf Z})$ sitting inside $G={\rm Sym}({\bf Z})$. So $G$ is the set of bijections from the set of integers to itself, $A$ is the set of permutations of the even integers or equivalently the permutations which fix all odd integers, and $B$ the set of permutations of the odd integers or equivalently the permutations which fix all even integers. Let $h:x\mapsto x+1$ be the simple forward translation map. Then $H=\langle A,B,h\rangle$ is an infinite group which acts transitively on the integers and preserves the nontrivial partition ${\bf Z}=2{\bf Z}\cup(1+2{\bf Z})$. |
H: Is the truncated exponential series for matrices injective?
If $k$ is a field of characteristic $p$, we can define a map $\exp:\mathfrak{gl}_n(k)\to GL_n(k)$ by:
$$\exp(A)=\sum_{i=0}^{p-1}\frac{A^i}{i!}$$
In the answer to this question, we see that if $A^p=B^p=0$, and if $\exp(A)=\exp(B)$, then $A=B$. So if $p>n$, $\exp$ is injective when restricted to nilpotents. I'd like to know whether or not $\exp$ is injective on all of $\mathfrak{gl}_n(k)$.
AI: I am not sure if there is any nice characterisation of injectiveness, but the truncated matrix exponential is not injective in all cases. Let $p=3>n=2$ and $A=\pmatrix{1&\ast\\ 0&0}$. Then $A^m=A$ for every $m\ge1$ and
$$\exp(A) = I + A + \frac12 A^2 = I + A + 2A = I = \exp(0).$$ |
H: Linear independence of $(x\sin x)^{\frac{n-1}{2}}$ and $(x\sin x)^{\frac{n+1}{2}}$
Could you tell me why $(x\sin x)^{\frac{n-1}{2}}$ and $(x\sin x)^{\frac{n+1}{2}}$ are linearly independent?
I've tried $\alpha(x \sin x)^{\frac{n-1}{2}} + \beta (x\sin x)^{\frac{n+1}{2}} =0$
$(x\sin x)^{\frac{n-1}{2}}(\alpha + \beta (x\sin x))=0$
but I don't know how that implies linear independence.
$x \in [0, \frac{\pi}{2}]$, so $(x\sin x) \ge 0$ on this interval.
AI: If $\alpha(x \sin x)^{\frac{n-1}{2}} + \beta (x\sin x)^{\frac{n+1}{2}} =0$ (the zero function), clearly it cannot happen that exactly one of $\alpha,\beta$ is zero. If both $\alpha$ and $\beta$ are nonzero, then by rearranging terms, we have $x\sin x=-\alpha/\beta$ for all $x$ such that $x\sin x\neq0$. I think it is easy enough to rule out this case. |
H: Counting Cosets of $\langle\tfrac12\rangle$, in $\Bbb{R}$ and in $\Bbb{R}^{\times}$
Describe the cosets of the subgroups described:
The subgroup $\langle\frac{1}{2}\rangle$ of $\mathbb{R}^{\times}$, where $\mathbb{R}^{\times}$ is the group of non-zero real numbers with multiplication.
The subgroup $\langle\frac{1}{2}\rangle$ of $\mathbb{R}$, where $\mathbb{R}$ is the group of real numbers with addition
AI: A coset of this subgroup is a set of real numbers whose pairwise ratios are powers of $2$ (really powers of $\frac{1}{2}$, but that's the same thing). For instance, the coset containing $\pi$ also contains $16\pi$ and $\frac{\pi}{128}$, because $16$ and $\frac{1}{128}$ are both powers of $2$.
Same thing, but this time the defining feature is that pairwise differences are multiples of $\frac{1}{2}$. That means that the coset containing, for instance, $\log 53$ also contains $\frac{2197}{2}+\log 53$. |
H: Probability generating function and expectation
Let $X$ be a Poisson random variable with parameter $Y$, where $Y$ is a Poisson random variable with parameter $\mu$. Prove that $G_{X+Y}(s)=\exp\{\mu (se^{s-1}-1)\}$.
I know that the generating function of a Poisson r.v. is $G(s)=\exp\{\lambda(s-1)\}$. Do I need to calculate the joint probability distribution first ($P(Z)$), where $Z=X+Y$?
AI: Hint: As recalled in the post, $E[s^X\mid Y]=\exp\{Y(s-1)\}$ hence $G_{X+Y}(s)=E[s^Xs^Y]$ is also $G_{X+Y}(s)=E[E[s^X\mid Y]s^Y]=E[\exp\{Y(s-1)\}s^Y]=E[t^Y]$ for $t$= $____$. Furthermore, for every $t$, $E[t^Y]=\exp\{\mu(t-1)\}$ hence $G_{X+Y}(s)=$ $______$. |
H: If $E(|X+Y|^p)<\infty$, then $E(|X|^p)<\infty$ and $E(|Y|^p)<\infty$.
If $X$ and $Y$ are independent and for some $p>0$: $E(|X+Y|^p)<\infty$, then $E(|X|^p)<\infty$ and $E(|Y|^p)<\infty$.
How can I go from $E(|X+Y|^p)<\infty$ using independence to conclude something about $X$ all alone?
AI: By independence,
$$
E[|X+Y|^p]=\int\limits_{}E[|X+y|^p]\mathrm dP_Y(y).
$$
Hence, if $E[|X+Y|^p]$ is finite then $E[|X+y|^p]$ is finite for $P_Y$-almost every $y$. In particular, there exists some $y$ such that $E[|X+y|^p]$ is finite. Maybe you can finish from here. |
H: Continuity in metric space
Let $F: \mathbb{R}^2 \rightarrow \mathbb{R}^3$ be defined by
$$ F(x,y) = \left( x^3 y,\ \ln(x^2 + y^2 + 1),\ \cos(x - y^2) \right) $$
When trying to show why $F$ is continuous where should I start?
AI: To answer your question, let me first define vector-valued functions.
A vector-valued function is a function from $\mathbb{R}$, or a subset of $\mathbb{R}$, to a
vector space $\mathbb{R}^n$. It comprises $n$ scalar functions, one for each of the coordinates. For instance, given scalar functions $f_1$, $f_2,\ldots,f_n$, we can construct a vector-valued function $f = (f_1, f_2, \ldots, f_n)$ defined
by $t \rightarrow (f_1(t), f_2(t), \ldots, f_n(t))$.
Now to your question:
A vector-valued function is continuous at a point $a$
if and only if all its components are continuous at $a$. Moreover,
a vector-valued function is continuous on a set $S$ if and only if
all its components are continuous on $S$. The same componentwise criterion applies to vector-valued functions of several variables, such as your $F:\mathbb{R}^2\to\mathbb{R}^3$.
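Applied to your example (a sketch of the argument): each component $x^3y$, $\ln(x^2+y^2+1)$ and $\cos(x-y^2)$ is continuous on $\mathbb{R}^2$, being built from polynomials and the continuous functions $\ln$ (whose argument $x^2+y^2+1\ge 1$ stays in its domain) and $\cos$ by sums, products and composition. Hence $F$ is continuous on all of $\mathbb{R}^2$. |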
H: Is $e^z\sum_{k=0}^\infty\frac{k^3}{3^k}z^k$ analytic inside $|z|=3$?
Am I correct that the following function is analytic at least inside $|z|=3$? (I used the ratio test.) The solutions manual says that the function is analytic on and inside |z|=1, so I wonder if I'm having a conceptual misunderstanding. $$e^z\sum_{k=0}^\infty\frac{k^3}{3^k}z^k$$
Also, is it possible to see/guess from a glance the radius of convergence of this particular function?
AI: Well, $e^z$ is an entire function, which never vanishes. So analyticity of your function is equivalent to the analyticity of the power series $\sum k^3/3^kz^k.$
This series converges for $|z|<3,$ as you suggested. |
H: Is it possible for a function to be analytic anywhere outside the circle of convergence of its power series expansion?
Is it possible for a function to be analytic anywhere outside the circle of convergence of its power series expansion? I'm referring to analytic fuctions of course (i.e. those with power series expansions but not necessarily analytic over the entire complex plane).
AI: The question is a little confusing, but how about $f(z) = \dfrac{1}{1-z}$.
The power series of $f$ around $z=0$ is $\sum_{n=0}^\infty z^n$ and this converges on $|z|<1$. The function, however, is analytic on $\mathbb{C}\setminus \{1\}$. |
H: spherically symmetric configurations
$$\Delta S -S +S^3=0$$ How can this differential equation be written in the following form:
\begin{equation}
\frac{d^2S}{d\rho^2}+\frac{D-1}{\rho}\,\frac{dS}{d\rho}
-S+S^3=0
\end{equation}
This is the equation for spherically symmetric configurations; $D$ is the number of dimensions.
Details are in the paper, equations (21) and (44).
AI: If $S$ is spherically symmetric, say $S(x) = S(\rho)$ with $\rho = |x|$, we have
\begin{align*}
\Delta S &= \sum_i \partial_i^2S\\
&= \sum_i \partial_i \bigl(\partial_\rho S\cdot \partial_i\rho\bigr)\\ &= \sum_i \left(\partial_\rho^2 S \cdot (\partial_i \rho)^2 + \partial_\rho S \cdot \partial_i^2\rho\right)
\end{align*}
Now
$$ \partial_i\rho(x) = \frac{x_i}{\rho(x)} $$
and hence
$$ \partial_i^2\rho(x) = \frac{\rho(x) - x_i\frac{x_i}{\rho(x)}}{\rho^2(x)} = \frac{\rho^2(x) - x_i^2}{\rho^3(x)} $$
So $$\sum_i (\partial_i \rho)^2 = \frac{1}{\rho^2(x)}\sum_i x_i^2 = 1$$
and
$$ \sum_i \partial_i^2\rho = \frac{D\rho^2 - \sum_i x_i^2}{\rho^3} = \frac{D-1}{\rho} $$
and hence
$$ \Delta S = \partial_\rho^2 S + \frac{D-1}\rho \cdot \partial_\rho S $$ |
H: Notation of Planes
I have to find the line $L := E_0 \cap E_1$, with
$E_0 = \left\langle \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} \right\rangle ^\perp$
$E_1 = \left\langle \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix} \right\rangle ^\perp + \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$
But I am unfamiliar with the notation of $E_0, E_1$. I know the notations which can be found in this wiki article.
Does anyone know this notation, and can point me into the right direction?
AI: According to the definition, if $W\subset V$, where $V$ is a vector space with an inner product, then $$W^\perp=\{v\in V\mid \langle v,w\rangle=0, ~\forall w\in W\}$$ Here $\langle v,w\rangle$ means the inner product of two vectors. For example, if $W=\{(0,0,z)\mid z\in\mathbb R\}\subset\mathbb R^3$ is our usual $z$-axis in $\mathbb R^3$, then we can find out that
$$W^\perp=\{(a,b,0)\mid~~a,b\in\mathbb R\}$$
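In the notation of your question (assuming the standard inner product on $\mathbb R^3$), $E_0=\left\langle \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} \right\rangle ^\perp$ is therefore the plane $x+2y+z=0$ through the origin, and $E_1$ is the plane with normal vector $(-1,1,1)$ translated by $(1,0,0)$, i.e. the plane $-(x-1)+y+z=0$; the line $L$ is the intersection of these two planes.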
Note that $W^\perp$ is a subspace of $V$. |
H: Velleman's Proof Designer problem 44
This is an exercise I didn't solve from the group of exercises present on Velleman's site that should be solved using Proof Designer (the little program that comes with the book).
Theorem:
If $\forall \mathcal{F} (\cup \mathcal{F} = A \rightarrow A \in \mathcal{F})$, then $\exists x (A=\{x\})$.
Before addressing my line of reasoning and the tentative proof, just a few words on this specific problem. I find it interesting because it shows me that I still have problems figuring out (and writing up) existence results properly. About this, I would say that no matter what book on proof writing you take, this point is usually treated quickly and the examples are trivial, so not particularly useful for real mathematics. Probably this is related to the fact that existence theorems are in general the "hard ones" to prove and to teach (I guess that they are also the most difficult for students to grasp), but I really don't know and I would like to hear the opinions of experts.
Line of Reasoning:
I think the idea is that $x=\emptyset$ and $A=\{\emptyset\}$. In this way, what we have to assume should hold. Indeed, the antecedent should "bind" the nature of the family of sets and the consequent would follow.
Tentative Proof:
Let x be the empty set $\emptyset$. Thus, $\forall \mathcal{F} (\cup \mathcal{F} = \{\emptyset\} \rightarrow \{\emptyset\} \in \mathcal{F})$. Take $\mathcal{F}=\{\emptyset,\{\emptyset\}\}$. Then, we have $\cup \{\emptyset,\{\emptyset\}\} = \{\emptyset\} \rightarrow \{\emptyset\} \in \{\emptyset,\{\emptyset\}\}$, and the result is established.
PS: As I pointed out in another post, in general I try to be more consistent grammatically speaking when I write down proofs, and I avoid the use of logical symbols. I tend to do it when I have doubts about the content of the problem and here... I have no clue at all, so forgive it. Any feedback is more than welcome.
AI: You are still quite far from proving the theorem. The issue seems to be that you misunderstood what the theorem is saying. I will try to explain it with a minimum of logical symbols, as you said you prefer that.
We are given a set $A$. First, let us investigate the premise:
$$\forall \mathcal F: \cup \mathcal F = A \to A \in \mathcal F$$
One could say that this expresses that $A$ is "union-irreducible": when it is the union of any collection of sets $\mathcal F$ (i.e. $\cup\mathcal F = A$), then it must be the case that one of these sets in $\mathcal F$ already is $A$.
It is important that we have the universal quantifier $\forall$ here, because we want to say that $A$ is "irreducible" under all unions, not just some particular ones.
Now let us turn to the conclusion:
$$\exists x: A = \{x\}$$
It does not say anything but: $A$ has precisely one element (formally: "there is an $x$ such that $x$ is the only element of $A$").
Thus, the statement of the theorem becomes: Any union-irreducible set $A$ consists of precisely one element.
In particular, we are not allowed to pick a set $A$, or, equivalently, the element $x \in A$. The theorem has to be proved for all union-irreducible sets.
For completeness, I will indicate a proof of the theorem. Let $A$ be a union-irreducible set.
We have the set $\mathcal F = \{\{x\}:x \in A\}$, comprising a singleton $\{x\}$ for each $x \in A$. It is trivial to see that $\cup\mathcal F = A$.
So the hypothesis applies to this $\mathcal F$ and yields $A \in \mathcal F$. But this means that there must exist an $x\in A$ such that $A = \{x\}$. The theorem follows. |
H: Some questions about complexification of a real vector space
Could you tell me how to prove that if $f:U \rightarrow U$ is $\mathbb{R}$-linear, then:
1) $U^{\mathbb{C}}$ is a vector space over $\mathbb{R}$ (should I check all eight conditions for a vector space?)
2) $f^{\mathbb{C}}: U^{\mathbb{C}} \rightarrow U^{\mathbb{C}}$ is the only $\mathbb{C}$ - linear mapping such that $f^{\mathbb{C}}|_U = f$
$U^{\mathbb{C}} = U \times U$ - it's a vector space over $\mathbb{C}$
$(a,b) + (c,d) = (a+c, b+d)$
$(a+bi)(c,d) = (ac-bd, ad+bc)$
$f^{\mathbb{C}}(a+ib) = f(a) + if(b)$
AI: For 1): As you know that $U^{\mathbb C}$ is a $\mathbb C$-vector space, think if restriction of the multiplication $\mathbb C \times U^{\mathbb C} \to U^{\mathbb C}$ to $\mathbb R\times U^{\mathbb C}$ can violate any condition.
For 2): Suppose $\def\C{\mathbb C}$$f^\C\colon U^\C \to U^\C$ if $\C$-linear with $f^\C|_U = f$, then we have for $x,y \in U$:
\begin{align*}
f^\C(x,y) &= f^\C\bigl((x,0) + i\cdot(y,0)\bigr)\\
&= f^\C(x,0) + if^\C(y,0)\\
&= \bigl(f(x),0\bigr)+i\cdot \bigl(f(y),0\bigr)\\
&= \bigl(f(x), f(y)\bigr)
\end{align*}
so there is at most one $\C$-linear extension of $f$, to prove existence, show that $f^\C(x,y) :=\bigl(f(x), f(y)\bigr)$ is $\C$-linear. |
H: On the equality of two generating functions related to plane partitions
I'd like to prove
$$\prod_{(i,j,k)\in\mathcal{B}(r,s,t)}\frac{1-q^{i+j+k-1}}{1-q^{i+j+k-2}}=\prod_{i=1}^r\prod_{j=1}^s\frac{1-q^{i+j+t-1}}{1-q^{i+j-1}},$$
where
$$\mathcal{B}(r,s,t)=\{(i,j,k):1\leq i\leq r,1\leq j\leq s,1\leq k\leq t\}.$$
I'm guessing that we could find a bijection between products of terms from the left-hand side and terms from the right-hand side, but how I am not sure how. Of course there might be something easy/obvious that I am missing. Any help or general hints on how to approach problems like this would be appreciated!
AI: The left hand side is the same as
$$\prod_{i=1}^r\prod_{j=1}^s\prod_{k=1}^t\frac{1-q^{i+j+k-1}}{1-q^{i+j+k-2}}$$
Now just consider
$$\prod_{k=1}^t\frac{1-q^{i+j+k-1}}{1-q^{i+j+k-2}}=\frac{1-q^{i+j+t-1}}{1-q^{i+j-1}}$$
because a lot of the factors cancel each other out. |
H: Minimal polynomials over the rationals and the reals
Find the minimal polynomial over $\mathbb Q$ and $\mathbb R$ for ...$\sqrt[3]{3}$, $1- i\sqrt{3}$, $2 + i$, $i\sqrt[3]{3}$
Sorry for my sqrt formulas .. I'm new here, hope to learn really fast to write a correct question in math symbols.
I have to find both minimal polynomial over $\mathbb Q$, as well over $\mathbb R$ ... It's kind of homework ... Any good explanations will be nice and very appreciated. Thanks
AI: I'll do the first one for you as hint.
Let $\alpha = \sqrt[3]{3}$
Let's get rid of that pesky radical by cubing both sides:
$\alpha^3 = 3$
Then move 3 over to the other side:
$\alpha^3 - 3 = 0$
Let $f = x^3 - 3$. Now $f$ is irreducible in $\mathbb Q[x]$ by Eisenstein's Criterion ($p=3$) and Gauss' lemma. So the minimal polynomial for $\sqrt[3]{3}$ over $\mathbb Q$ is $m_{\sqrt[3]{3}} = x^3 - 3$.
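For the $\mathbb R$ part of the question, a general remark may help (not specific to this exercise): a real number such as $\sqrt[3]{3}$ has minimal polynomial $x-\sqrt[3]{3}$ over $\mathbb R$, while a non-real number such as $1-i\sqrt{3}$ has the real quadratic coming from its conjugate pair, here $(x-(1-i\sqrt{3}))(x-(1+i\sqrt{3}))=x^2-2x+4$.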
The rest follow a very similar procedure. |
H: Question about eigenvalues
I have this :
I don't understand why they write $\lambda=m^2 , m\in \mathbb{N}\cup\lbrace0\rbrace$.
It's true that the $\lambda=m^2$ are the eigenvalues of $(P_0)$, but $0$ is not an eigenvalue!
After that, why do they write $\mathbb{N}\cup \lbrace0\rbrace$? $0$ already belongs to $\mathbb{N}$!
Please, help me
Thank you.
AI: Zero is an eigenvalue. Let $x(t)=1$ to see this.
Whether or not zero is a natural number depends on your definition of $\mathbb N$ which is not standardized. Either convention can be used. |
H: Which polygon tile grids allow convex polygons to be formed from multiple tiles?
If I have a grid made of equilateral triangles, I can easily form larger convex polygons as a set of tiles in that grid. I believe this holds for some (but not all) tilings of non-equilateral triangles.
The same for quadrilaterals - it's obvious for squares, rectangles and parallelograms, but I believe it also holds for some other (but not all) tilings of quadrilaterals.
I'm pretty sure tiling a flat area purely with convex pentagons is impossible, so skip that case.
With hexagons, a tiled grid is possible, but the only way to make a convex polygon from those convex hexagonal tiles seems to be to use only one tile - no multi-tile convex polygons seem possible.
My speculation is that it's only possible to form a multi-tile convex polygon from convex polygon tiles if all those tiles have interior angles at all vertices of 90 degrees or less, at least for those vertices that are at a vertex or edge of that larger convex polygon.
Is that speculation correct? Is there a proof or disproof?
AI: Sorry - I figured it out within a minute of posting. Should have thought longer before asking.
If two polygons are joined such that a vertex and edge are on the same point and line, there are two cases...
If the sum of the two interior angles is greater than 180 degrees, this is a concavity. It's only valid as part of a larger convex polygon if that concavity is filled by more tiles.
If the sum of the two interior angles is less than or equal to 180 degrees, there is no concavity. This vertex may be at the vertex or edge of a larger convex polygon formed from these tiles.
So the issue is the sum of the angles, not the interior angle at the vertex of one particular tile. |
H: Taylor series of $f(x)=\frac {e^x-1}{x}$
I am asked to expand $f(x)=\frac {e^x-1}{x}$ centered at 0 using the known Talyor series of functions.
How to simplify the function so that it can be expanded more easily?
AI: If you expand $e^x$ and subtract 1, then you get something divisible by $x$. You should find
$$\displaystyle\sum_{k=0}^{+\infty}\frac{x^k}{(k+1)!}$$ |
H: Applications of Double/Triple Integrals
This is the question that I need to solve using mathematica:
The concentration of an air pollutant at a point $(x,y,z)$ is given by: $$p(x,y,z) = x^2y^4z^3 \text{ particles}/m^3$$ We're interested in studying the air quality in a region in 3-space which satisfies $x\ge0$ and is bounded by the $xy$-plane, the surface $z=\sqrt{4-x^2-y^2}$, the plane $y=-2x$, and the $xz$-plane. Find the total amount of pollutant in this region.
I don't want a full answer to this question, obviously (I need to work it out myself), but I'm just having trouble getting started. How do I use the bounds here to set up an integral for the problem? What exactly do the bounds mean/represent in the first place?
AI: It's a bit of a puzzle to work out the region of integration and a nice way to express it using limits of integration. A priori the fact that a bunch of "bounds" are thrown out by a problem does not tell us whether the region is actually finite (a finite volume in this case), and even if so you might not be able to package up the region with a single set of nested integral signs and their associated limits of integration.
Without a picture to guide us, I'd probably start from the beginning of the list, assuming some good faith on the part of the problem's constructor. Now $x \ge 0$ is an easy one to express, by itself. If we put $x$ as the variable for the outer integration, then the two nested integrals can have limits that depend on the $x$ value referenced in the outer integration.
"Bounded by the $xy$-plane" is a little ambiguous. The $xy$-plane is where $z=0$, so as a boundary this means either restricting to $z \ge 0$ or $z \le 0$, one side or the other. Let's keep an open mind as we parse the remaining pieces of the boundary puzzle.
Aha! The surface $z = \sqrt{4 - x^2 - y^2}$ clarifies things. For one, the square root sign here explicitly means nonnegative values of $z$ on the boundary, so evidently it's the upper half space $z \ge 0$ we need. Also this is a hemisphere, half the surface of a ball, so it does restrict integration to a finite volume.
I'll let you pursue the remaining pieces of the puzzle, but what we ultimately hope to do is express the region using nested limits of integration something like this:
$$ \int_0^c \int_{f(x)}^{g(x)} \int_{u(x,y)}^{v(x,y)} p(x,y,z) dz dy dx $$
where the functional notation I've used should be replaced by some expressions, and I've assumed $x$ for the outer integration, $z$ for the innermost integration.
I've left you a particularly tricky point to work out, perhaps with the aid of drawing a picture. The problem says $y = -2x$ is part of the boundary, but which side of this plane is the region on? Is $y$ supposed to be above or below $-2x$? The final boundary piece is described as the $xz$-plane, which is where $y = 0$. How does that fit with the other pieces?
If the order of integration were as I've suggested above, we'd have to find a constant $c$ that is the maximum of the range for $x$, and I've filled in $0$ as the minimum of that range. The limits of integration on $y$ can then depend on the particular value of $x$, and further the limits of $z$ on both $x$ and $y$.
Evaluating the integral is then a matter of working from the innermost one out, doing the integration with respect to $x$ (or so I've assumed) last. |
H: How to resolve this algebra equation?
$$f = X^3 - 12X + 8$$ where $a$ is a complex number and $a$ is a root of $f$,
$b = a^2/2 - 4 $.
Show that $f(b) = 0$
This is one of my theme exercises ... Some explanations will be appreciated ! Thank you all for your time .
AI: \begin{align*}
f(b) &= f(\frac{a^2}{2} - 4) \\
&= (\frac{a^2}{2} - 4)^3 - 12(\frac{a^2}{2} - 4) + 8 \\
&= \frac{a^6}{8} - 3a^4 + 24 a^2 - 64 - 6a^2 + 48 + 8 \\
&= \frac{a^6}{8} -3a^4 + 18a^2 -8
\end{align*}
On the other hand,
$$
f(a) = a^3 - 12a + 8 = 0
$$
and therefore
\begin{align*}
0 = 0^2 &= (a^3 - 12a + 8)^2 \\
&= a^6 - 24a^4 + 16a^3 + 144 a^2 - 192a + 64 \\
&= (a^6 - 24a^4 + 144 a^2 - 64) + (16a^3 - 192a + 128) \\
&= 8(\frac{a^6}{8} -3a^4 + 18a^2 -8) + 16(a^3 - 12a + 8)\\
&= 8f(b) + 16f(a) \\
&= 8f(b)
\end{align*}
Hence,
$$
f(b)=0.
$$ |
H: proving that symplectic lie algebra is a subalgebra of GL
Suppose $S$ is an $n \times n$ matrix over a field $F$. Define
$gl_S(n,F)=\{A \in gl(n,F): SA+A^TS= 0\}$
Show that this is a subalgebra of $gl(n,F)$
I get as far as:
$A \in gl_S(n,F)$ and $B \in gl_S(n,F)$
$S[AB] +[AB]^T S =SAB -SBA + (AB)^TS-(BA)^TS $
$=SAB + B^TA^TS - (SBA + A^TB^TS)$
I guess it's supposed to equal to zero somehow.
AI: Since $SA=-A^TS$ and $SB=-B^TS$,
$$
S[A, B]=SAB-SBA=-A^T SB+B^T SA=A^T B^T S-B^TA^TS=[A^T, B^T] S=-[A, B]^TS.
$$ |
H: If $x^2\equiv 1 \pmod{n}$ and $x \not\equiv \pm 1 \pmod{n}$, then either $\gcd(x-1,n)$ or $\gcd(x+1,n)$ is a nontrivial factor of n
I'm reading elementary number theory and trying to understand the following problem: If $x^2\equiv 1 \pmod{n}$, $n=pq$, $p$ and $q$ are odd primes and $x \not\equiv \pm 1 \pmod{n}$, then either $\gcd(x-1,n)$ or $\gcd(x+1,n)$ is a nontrivial factor of $n$.
EDIT: Andreas Caranti wrote an updated, corrected version of my problem, so I wrote the definition again.
AI: I assume your primes are odd and distinct. Also, your statement is not correct as written, you have to add $x \not\equiv \pm 1 \pmod{n}$. The latter assumption implies that $\gcd(x-1, n), \gcd(x+1, n) < n$.
You have $n \mid x^{2} - 1 = (x-1)(x+1)$. Now if $\gcd(x - 1, n) = 1$, then $n$ divides $x + 1$, a contradiction, as we have taken $x \not\equiv -1 \pmod{n}$. So $\gcd(x - 1, n) > 1$. Similarly, $\gcd(x + 1, n) > 1$.
It follows that $\gcd(x-1,n)$ and $\gcd(x+1, n)$ are nontrivial, proper divisors of $n$, hence $p$ or $q$. |
H: Unitary and transformation matrix
I have a question that I do not understand how to solve.
Let $V$ be inner product space.
Let {$e_{1},...,e_{n}$} be an orthonormal basis for $V$
Let {$z_{1},...,z_{n}$} be another orthonormal basis for $V$
I have to show that the matrix representing
the change of basis from {$e_{1},...,e_{n}$} to
{$z_{1},...,z_{n}$} is unitary.
How do I do it ?
I got no clue!!
AI: I'd sketch a proof as follows. Regard the vectors in the orthonormal bases $\left(e_{1},...,e_{n}\right)$ and $\left(z_{1},...,z_{n}\right)$ as column vectors. Then let $\mathbf{U}_e$ and $\mathbf{U}_z$ be the matrices where the rows are the transposes of the column vectors $e_{1},...,e_{n}$ and $z_{1},...,z_{n}$ respectively. Then both $\mathbf{U}_e$ and $\mathbf{U}_z$ are unitary, and the matrix which maps between $\left(e_{1},...,e_{n}\right)$ and $\left(z_{1},...,z_{n}\right)$ is going to be given by $\mathbf{U}_e\mathbf{U}_z^{-1}$, which will be unitary because $\mathbf{U}_e$ and $\mathbf{U}_z$ are unitary. |
H: Parametric simultaneous equations
I stumbled on this one a few days ago and I'm probably missing something obvious...
I basically need to solve these parametric equations for the point $(x,y)$ on the curve other than $(t^2,t^3)$:
$$2y = 3tx - t^3$$
$$x = t^2$$
$$y = t^3$$
To have more context, the question started with $x = t^2$, $y = t^3$, asking to find the equation of the tangent at the point $(t^2,t^3)$, then to find the other point where this tangent meets the parametric curve.
It just seems that whenever I try something, I get back to $(t^2,t^3)$ instead of the other point. I've been trying with:
$2(t^3) = 3tx - t^3$, then
$2y = 3t(t^2) - t^3$, then
$2\sqrt{x^3} = 3tx - t^3$ (but this one is intimidating for something which should be much simpler than that)
Is there perhaps a way to exclude $(t^2,t^3)$ to get the other solution? The other point should be $\left(\dfrac{t^2}{4},-\dfrac{t^3}{8}\right)$ by the way.
AI: I think your problem is that the $t$ in the first equation should be fixed, whereas the $t$ in the second and third is actually your parameter.
The tangent line to $(t^2,t^3)$ at the point $s$ is indeed given by $(s^2,s^3)+\lambda(2s,3s^2)$ for $\lambda\in\mathbb{R}$.
This line intersects $(t^2,t^3)$ whenever $t^2 = s^2+2\lambda s$ and $t^3=s^3+3\lambda s^2$. First we can eliminate $\lambda$ to obtain $ 2t^3-3st^2+s^3=0$. Since we already know the solution $t=s$, we may factor it out:
$$2t^3-3st^2+s^3 = (t-s)(2t^2-st-s^2)=0.$$
Now we may solve the resulting quadratic formula by completing the square $$2t^2-st-s^2 = 2\left(t-\frac{s}{4}\right)^2-\frac{9}{8}s^2=0.$$
Thus $t = \frac{s}{4} \pm\frac{3}{4}s$. Thus $t=s$ or $t=-\frac{1}{2}s$.
Hence, the non-trivial intersection point is at $\left(\frac{1}{4}s^2,-\frac{1}{8}s^3\right)$. |
H: $f(x)=|\cos x|+|\sin(2-x)|$ at which of the following point $f$ is not differentiable?
$f(x)=|\cos x|+|\sin(2-x)|$ at which of the following point $f$ is not differentiable?
1.$\{(2n+1){\pi\over2}\}$
2.$\{n\pi\}$
3.$\{{n\pi\over 2}\}$
4.$\{n\pi+2\}$
in all cases $n\in\mathbb{Z}$
Well, is there any easy trick to solve this type of problem in a competitive examination? Do I need to pick $\pi \over 2$ from set $1$ and check whether $f$ is differentiable there, and use the same trick for each set? I am somewhat lost.
AI: A function isn't differentiable where it has sharp corners, since the tangent line at such a point is not well-defined. The absolute value of a function has a sharp corner wherever its argument changes sign, so here $f$ fails to be differentiable where $\cos x$ changes sign, i.e. at $x=(2n+1)\frac{\pi}{2}$, and where $\sin(2-x)$ changes sign, i.e. at $x=n\pi+2$; at each such point the other summand is smooth, so the corner survives. So the points in options $1$ and $4$ are the points of non-differentiability. |
H: $X=(-\infty,0]\cup\left\{{1\over n}:n\in\mathbb N\right\}$ with subspace topology Then
$X=(-\infty,0]\cup\left\{{1\over n}:n\in\mathbb N\right\}$ with subspace topology. Then
$0$ is an isolated point
$(-2,0]$ is an open set
$0$ is a limit point of the subset $\left\{{1\over n}:n\in\mathbb N\right\}$
$(-2,0)$ is open set.
$0$ is not an isolated point of $X$ as every nbd of $0$ contains a point of the form $1\over n$ so $3$ is true and $1$ is false, $2$ is false as $0$ is not interior point, $4$ is true as every point is interior point. Am i right?
AI: You are right.
3) is true as every nbd of 0 contains a point of the form $\frac 1n$.
4) is true as for any point $x\in (-2,0)$, there exists an open set $U$ of $\mathbb R$ such that $x \in U \cap X \subseteq (-2,0)$. |
H: Applying Urysohn Lemma on $\mathbb{R^2}$
$A_1=\text{ Closed Unit Disk}$, $A_2=\{(1,y):y\in\mathbb{R}\}$, $A_3=\{(0,2)\}$.
Then there always exists a real-valued continuous function on $\mathbb{R}^2$ such that
$f(x)=a_j$ for $x\in A_j$, $j=1,2,3$
Iff at least two of the numbers $a_1,a_2,a_3$ are equal.
if $a_1=a_2=a_3$
$\forall a_i\in\mathbb{R}$
iff $a_1=a_2$
As all three sets are closed and we are in a normal space, we can apply Urysohn's lemma to ensure the existence of such a continuous function; and since $A_1$ and $A_2$ are not disjoint, any such $f$ forces $a_1=a_2$, so I see that $4$ is true. Am I right?
AI: You are right. Note that 2. is also true. |
H: For every monoid $M$ with zero is there a group $G$ such that $\mathrm{Grp}(G,G)\cong M$?
All monoids that I will consider here have identities. A monoid $M$ is said to have a zero iff $\exists z\in M \forall x\in M (zx=xz=z)$.
Let $M$ be any monoid with a zero. Must there exist a group $G$ such that $\mathrm{Grp}(G,G)\cong M$ ?
I know that there is no reason to suppose that there must exist such a group. I also expect that there will exist a counterexample, however the fact that I can't find one is irritating me.
Thank you
AI: The group of invertible elements of $\operatorname{Grp}(G,G)$ contains the automorphism group of $G$, and in particular the group of inner automorphisms, which is isomorphic to $G/Z(G)$.
So take the monoid $\{ z \} \cup C_{p}$, where $p > 2$ is a prime, $C_{p}$ is a multiplicative cyclic group of order $p$, and $z$ is the zero.
So $G/Z(G)$ is isomorphic to a subgroup of $C_{p}$, and thus cyclic. Therefore $G = Z(G)$ is abelian. Then $\operatorname{Grp}(G,G)$ is nothing else but the (multiplicative part of the) endomorphism ring of $G$, and the automorphism group of $G$ has odd order $p$.
Now in any abelian group inversion is an automorphism, so inversion must be trivial here, that is, $G$ has exponent $2$. Clearly $\lvert G \rvert = 2$ is impossible, and if $\lvert G \rvert > 2$, then $G$ is a vector space of dimension $\ge 2$ over the field with two elements, and thus swapping two basis elements would be an automorphism of order $2$, a contradiction. |
H: Nonlinear system of equations
$$
\left\{\begin{array}{l}
x^y = y^x \\
x-y\cdot\log_xy=(x+y)\cdot\log_xy
\end{array}
\right.
$$
Thanks for your time!
AI: With rules for logarithms the second equation translates to
$$
x - y\log_xy = \log_x y^x + y \log_x y
$$
and using the first equation this simplifies to
$$
x= y+2y \log_xy
$$
and then
$$
\log_xy = \frac{x-y}{2y}
$$
From first equation you can express
$$
y = x^{\frac{y}{x}}
$$
implying
$$
\frac{y}{x} = \frac{x-y}{2y}
$$
which translates to
$$
x^2 -xy - 2y^2 = 0
$$
and factoring this you get
$$
(x+y)(x-2y)=0
$$
that gives $2$ solutions which, as usual with equations involving logarithms, need to be checked. Try. |
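For the record (this check is mine, not part of the answer): the factor $x=-y$ must be rejected, since $\log_x y$ requires $x>0$, $x\neq 1$ and $y>0$, while $x=2y$ combined with $x^y=y^x$ gives $(2y)^y=y^{2y}$, i.e. $2y=y^2$, so $y=2$ and $x=4$. Indeed $(x,y)=(4,2)$ satisfies both original equations: $4^2=2^4=16$ and $4-2\log_4 2=3=(4+2)\log_4 2$.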
H: Vector Line Integral Question
I need to compute the line integral for the vector $\vec{F} = \langle x^2,xy\rangle$, for the curve specified: part of circle $x^2+y^2=9$ with $x \le0,y \ge 0$,oriented clockwise.
Once again, I'm stuck at the setup (this happens a lot with me). I know that I need to parameterize F, but how would I go about doing this exactly?
AI: We have $$F(x,y)=x^2\textbf{i}+xy\textbf{j}$$ and $$C: x^2+y^2=9,\quad x\le0,\ y\ge0,$$ oriented clockwise, which we can parametrize as $r(t)=(3\cos t,3\sin t)$ with $t$ running from $\pi$ down to $\pi/2$. Then $$\oint_CF\cdot dr=\int_{\pi}^{\pi/2} F(3\cos t,3\sin t)\cdot(-3\sin t,3\cos t)\,dt=\int_{\pi}^{\pi/2}(9\cos^2 t,9\sin t\cos t)\cdot(-3\sin t,3\cos t)\,dt=\int_{\pi}^{\pi/2}(-27\sin t\cos^2 t+27\sin t\cos^2 t)\,dt=0.$$ |
H: is $(x+1)^4-x^4$ non-prime for all natural positive integers $x$
Looking at the difference between two neighbouring positive integers each raised to the power 4, I found that all such differences for integer neighbours up to $(999,1000)$ are non-prime.
Does this hold for all positive integers?
And can someone please prove it?
AI: Hint: you can write $(x+1)^4-x^4=((x+1)^2-x^2)((x+1)^2+x^2)=(x+1-x)(x+1+x)((x+1)^2+x^2)=(2x+1)((x+1)^2+x^2)$; for every positive integer $x$ both factors are greater than $1$, so the difference is never prime. |
H: What's the motivation of the ideal?
I'm reading a book on Algebra, it introduces the concept of ring after some examples, the concept of ideal.
Definition I.1.8. Let $(A,+,\cdot)$ be a ring and $I$ a non-empty subset of $A$. We say that $I$ is a ideal if:
$x+y\in I,\;\;\;\;\forall \; x,y \in I$
$ax\in I, \;\;\;\;\forall \;x\in I, \;\;\;\;\forall a\in A$
I can't understand the motivation for the ideal: the sum of any two elements of $I$ is an element of $I$, and the product of any element of $I$ with any element of $A$ is an element of $I$. I'm confused: what is that useful for?
AI: The notion of ideal comes from a generalization of modular arithmetic. It is a refinement (by Dedekind) of Kummer's notion of ideal number, which arose from attempts to prove Fermat's Last Theorem (or some special cases).
When you have a number $n\in {\bf Z}$, then the multiples of $n$ form an ideal in ${\bf Z}$ (a principal ideal denoted by $(n)$ or $n{\bf Z}$). The ring ${\bf Z}_n$ of integers modulo $n$ is then actually the quotient ring ${\bf Z}/n{\bf Z}$. If $p$ is prime, then $p{\bf Z}$ is a prime (even maximal) ideal and ${\bf Z}_p={\bf Z}/p{\bf Z}$ is a domain (even a field).
Similarly, if you have any ring $R$ and an ideal $I\unlhd R$, then $R/I$ makes sense and has a natural ring structure, which is a domain iff $I$ is prime and a field iff $I$ is maximal.
For Dedekind domains, there is a further analogy: if $R$ is a Dedekind domain (for example, the integer subring of a number field), then any nonzero ideal $I\unlhd R$ factors uniquely into a product of prime ideals and nonzero prime ideals are exactly the maximal ideals.
In ${\bf Z}$, this corresponds to the fact that any natural number decomposes uniquely into a product of prime numbers (as ${\bf Z}$ is a principal ideal domain, every ideal is principal). |
H: Does it make sense to talk about the concatenation of infinite series?
Does a series of numbers defined as the concatenation of two or more infinite series, for example all the positive integers followed by all the negative integers, make mathematical sense?
I came up with this problem while writing an implementation of infinite series in Erlang. I'm not sure how to represent this in code in a way that isn't horrific, and wondered if it even made any sense. I'm a programmer, not a mathematician, so won't understand any answers too steeped in technical terms!
EDIT: Ok, After reading and being confused by a few answers I have discovered that the mathematical definition of concatenation appears to be a bit different to my understanding of it from programming.
When I say "the concatenation of two infinite series", what I actually mean is a new series that consists of all the elements of the first series followed by all the elements of the second series.
So for the example I gave above, the positive integers followed by the negative integers, you could start at the beginning of the series and get only positive numbers, never in fact reaching the second infinite series.
However, if you then applied some sort of 'filter' to this concatenated (in my definition) infinite series, that would somehow make the first part of it finite, then it would be possible to count up to elements of the second series.
I hope this makes more sense now, and if there is a more appropriate mathematical term I can use instead of concatenation, please let me know. Ta!
AI: The answer is, it depends ;-).
It certainly does make sense to talk about the concatenation of two infinite series. If you view an infinite series as a function $f \,:\, \mathbb{N} \to A$ which assigns to each number one element of $A$, then you can view the concatenation of two such series $f_1$, $f_2$ as function $g \,:\, \{1,2\}\times\mathbb{N} \to A$, where $g(1,n) = f_1(n)$ and $g(2,n) = f_2(n)$.
The resulting $g$, however, isn't an infinite series itself, since it doesn't map integers to elements of $A$, but rather pairs of integers.
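As a rough illustration of this (my own sketch, in Python rather than the Erlang mentioned in the question; the function names are made up for the example), you can model the "two-block" object by indexing with a pair $(i,n)$, exactly as in the definition of $g$ above:

    def f1(n):
        # first infinite sequence, e.g. the positive integers
        return n + 1

    def f2(n):
        # second infinite sequence, e.g. the negative integers
        return -(n + 1)

    def g(i, n):
        # concatenation of f1 and f2, indexed by a pair (i, n) with i in {1, 2}
        return f1(n) if i == 1 else f2(n)

    # Walking "from the start" only ever visits the first block:
    print([g(1, n) for n in range(5)])   # [1, 2, 3, 4, 5]
    print([g(2, n) for n in range(5)])   # [-1, -2, -3, -4, -5]

The point is visible here: no amount of stepping through the first block ever reaches the second one; you have to jump to the index $(2,0)$ explicitly, which is what the ordinal bookkeeping below formalizes.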
If you want the result of such a concatenation to be an infinite series itself, you'll have to use a more general definition of what constitutes an infinite series. A way to do that is to use ordinal numbers (http://en.wikipedia.org/wiki/Ordinal_number). Using that, you could define an infinite series of length $\alpha$ to be a mapping $$
s \,:\, \{\beta \,:\, \text{$\beta$ is an ordinal and }\beta < \alpha\} \to A \text{.}
$$
The original definition of an infinite series would then coincide with this definition if you pick the length $\omega$, i.e. the smallest ordinal larger than all finite ordinals. For $g$, i.e. the concatenation of two series of length $\omega$, you'd have to pick $\alpha = \omega + \omega$. In other words, those would have length $\omega + \omega$.
Ordinals are often constructed in a way that defines an ordinal $\alpha$ to simply be the (set-theoretic) union of all smaller ordinals. That allows you to simplify the definition of a series of length $\alpha$ to just $$
s \,:\, \alpha \to A \text{.}
$$
You'll have to decide how far (in terms of ordinal sizes) you want to be able to take this. If you only allow finitely many concatenation steps of series of length $\omega$, then all the ordinals you'll encounter will be smaller than $\omega\cdot\omega$. If you allow $\omega$-many concatenations of series of length $\omega$, one such step reaches $\omega\cdot\omega = \omega^2$, the next one $\omega^3$ and so on. After $\omega$-many such steps you reach $\omega^\omega$. A natural limit may thus be the ordinal $\epsilon_0$ (http://en.wikipedia.org/wiki/%CE%95%E2%82%80), which cannot be expressed as a Cantor normal form (http://en.wikipedia.org/wiki/Ordinal_arithmetic#Cantor_normal_form) of smaller ordinals. By limiting yourself to ordinals smaller than that, you're assured that any ordinal you encounter can be written as the sum of finitely many powers of $\omega$, where the exponents obey the same restriction. It should be possible to represent such ordinals quite efficiently in a program, though I've never tried. But something tells me there's a ton of literature on that... |
H: Does the Weierstrass M-test show analyticity?
I'm trying to show (textbook exercise) that the riemann-zeta function is analytic. The solution is here:
Why does the proof say that the zeta series converges to an analytic function? Doesn't the M-test merely show uniform convergence? The zeta series (whose term is inside the first modulus in the above solution) isn't a power series, so I can't argue that convergence implies analyticity either.
AI: The limit of a sequence of analytic functions that converges uniformly is an analytic function.
This can be proved by combining uniform convergence with Morera's theorem because uniform convergence allows you to switch limit and integral.
http://en.wikipedia.org/wiki/Morera%27s_theorem#Uniform_limits |
H: Prove that $f(f(x))=x$ has no roots .... $f$ having a general form
This problem gave me some headache, especially because $f$ has a general form:
let $f(x) = ax^2 + bx + c$. Suppose that $f(x) = x$ has no real roots.
Show that equation $f(f(x))=x$ has also no real roots.
AI: Consider $g(x) = f(x)-x$. If $f$ is continuous, then $g$ will be continuous as well. A continuous function without zeroes is either strictly positive or strictly negative. This means that either $f(x)>x$ for all $x$ or $f(x)<x$ for all $x$.
Hence either $f(f(x))<f(x)<x$ for all $x$ or $f(f(x))>f(x)>x$ for all $x$. Either way, $f(f(x))\neq x$ for all $x$. |
H: Improper Integral Question $ \int^{\infty}_0 \frac{\mathrm dx}{1+e^{2x}}$
I want to check whether this improper integral converges, and evaluate it:
$$ \int^{\infty}_0 \frac{\mathrm dx}{1+e^{2x}}.$$
What I did so far is :
set $t=e^{x} \rightarrow \mathrm dt=e^x\mathrm dx \rightarrow \frac{\mathrm dt}{t}=dx
$ so the new integral is:
$$ \int^{\infty}_0 \frac{\mathrm dt}{t(1+t^2)} = \int^{\infty}_0 \frac{\mathrm dt}{t}-\frac{\mathrm dt}{1+t^{2}}$$
Now, to calculate the improper integral, do I need to write down the antiderivative $F(x)$ and then check the limit?
Thanks!
AI: You made a couple of mistakes. Firstly, you forgot to change the limits of the integration, so your integral is actually $\displaystyle\int_1^\infty \frac{\mathrm{d}t}{t(1+t^2)}$. Furthermore, $\frac{1}{t(1+t^2)} \neq \frac{1}{t}-\frac{1}{1+t^2}$. Rather $\frac{1}{t(1+t^2)} = \frac{1}{t}-\frac{t}{1+t^2}$.
Hence your integral becomes $\displaystyle\int_1^\infty \frac{1}{t}-\frac{t}{1+t^2}\,\mathrm{d}t = \left[\log(t)-\frac{1}{2}\log(1+t^2)\right]_1^\infty = \left[\frac{1}{2}\log\left(\frac{t^2}{1+t^2}\right)\right]_1^\infty$
$$ = \frac{1}{2}\left[\log(1)-\log\left(\frac{1}{2}\right)\right] = \frac{1}{2}\log(2).$$ |
H: Linear equation of 4 variables
I'm stuck on this Math problem :
How many solutions does the equation
$x_{1} + x_{2} + 3x_{3} + x_{4} = k$
have, where $k$ and the $x_{i}$ are non-negative integers such that $x_{1} \geq 1$, $x_{2} \leq 2$, $x_{3} \leq 1$
and $x_{4}$ is a multiple of 6.
I tried to write the possible cases for $x_{2}$ and $x_{3}$ since they are bounded.
Case $x_{2} = 2$ :
Case $x_{3} = 1$ : $x_{1} + 5 + x_{4} = k$
Case $x_{3} = 0$ : $x_{1} + 2 + x_{4} = k$
Case $x_{2} = 1$ :
Case $x_{3} = 1$ : $x_{1} + 4 + x_{4} = k$
Case $x_{3} = 0$ : $x_{1} + 1 + x_{4} = k$
Case $x_{2} = 0$ :
Case $x_{3} = 1$ : $x_{1} + 3 + x_{4} = k$
Case $x_{3} = 0$ : $x_{1} + x_{4} = k$
But now I'm stuck. Should I try to solve all of these equations? Am I on the right track? Any help would be greatly appreciated.
EDIT :
I found that the number of solutions will be given by the coefficient $a_{k}$ of $x_{k}$, i.e :
$(x^0+x^1+x^2)(x^0+x^1)(\displaystyle\sum\limits_{k=1}^\infty x^k)(\displaystyle\sum\limits_{k=0}^\infty x^{6k})$
$=(x^0+x^1+x^2)(x^0+x^1)(\frac{1}{(1-x^3)(1+x^3)})(\frac{x}{1-x})$
$=\frac{x+2x^2+2x^3+x^4}{(x^3-1)(x^3+1)(x-1)}$
Now I don't know how to find the $a_{k}$ coefficient.
AI: This will give you a start: it's the coefficient of $x^k$ in $$(x+x^2+x^3+\dots)(1+x+x^2)(1+x^3)(1+x^6+x^{12}+\dots)$$ |
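If it helps to sanity-check this for small $k$, here is a small brute-force comparison against the generating-function coefficient (my own illustrative snippet, not part of the hint; the truncation degree and test range are arbitrary):

    def brute(k):
        # direct count of solutions with x1 >= 1, x2 <= 2, x3 <= 1, x4 a multiple of 6
        count = 0
        for x1 in range(1, k + 1):
            for x2 in range(3):
                for x3 in range(2):
                    x4 = k - x1 - x2 - 3 * x3
                    if x4 >= 0 and x4 % 6 == 0:
                        count += 1
        return count

    def poly_mul(p, q, deg):
        # multiply two coefficient lists, truncating at the given degree
        r = [0] * (deg + 1)
        for i, pi in enumerate(p):
            for j, qj in enumerate(q):
                if i + j <= deg:
                    r[i + j] += pi * qj
        return r

    def gf_coeff(k):
        # coefficient of x^k in (x+x^2+...)(1+x+x^2)(1+x^3)(1+x^6+x^12+...)
        f1 = [0] + [1] * k
        f2 = [1, 1, 1]
        f3 = [1, 0, 0, 1]
        f4 = [1 if i % 6 == 0 else 0 for i in range(k + 1)]
        p = poly_mul(poly_mul(poly_mul(f1, f2, k), f3, k), f4, k)
        return p[k]

    print(all(brute(k) == gf_coeff(k) for k in range(1, 30)))   # expected: True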
H: Determine conjugate function
Let $f:\mathbb{R}\to\mathbb{R}$ defined by $f(x)=e^x$. Determine $f^*(y)$.
I try to use some inequalities to get supremum but it is impossible. Seemingly, I must consider some cases of $x$.
AI: $$f^*(y)=\textstyle\sup_x \langle y, x \rangle - e^x.$$
Since the supremum involves a smooth function we can examine the derivative:
$$\frac{d}{dx}(yx-e^x) = y-e^x = 0\quad\Longrightarrow\quad x=\log y~~\text{if}~~y>0.$$
So if $y>0$, at least,
$$f^*(y) = y\log y - y.$$
If $y=0$, then the supremum is $0$; if
$y<0$, then the supremum is $+\infty$. (In both cases, let $x\rightarrow-\infty$.) So the final answer is
$$f^*(y)=\begin{cases} y\log y - y & y > 0 \\ 0 & y = 0 \\ +\infty & y < 0 \end{cases}$$
If you don't like extended-valued functions just drop the $y<0$ case and state that
$$\mathop{\textrm{dom}}(f^*) = \mathbb{R}_+.$$ |
H: Intuition behind closed subsets of a metric space?
Reading for my exam in real analysis, I struggle with the definition of a closed subset of a metric space.
Consider a metric space $$(X,d)$$
Then consider a subset of this space$$F$$
What the book is trying to convince me of is that F is closed iff a convergent sequence in F always has a limit belonging to F.
The intuition that:
$$x_1,\dots,x_n,\dots\in F$$
$${x_n}\rightarrow a $$
where
$$a\in F$$
for all convergent sequences in F implies that F is closed makes sense to me.
What I am struggling with is understanding the converse. What is it about F containing all of its boundary points that makes it impossible to have a sequence in F converging to some limit outside of F (even though the sequence will never get there)?
Edit:
Is the last sentence in the parenthesis actually the reason I am looking for? That it is impossible to get arbitrarily close to some point outside of F, because when you get really close, then you're not necessarily in F at all?
AI: I'm slightly confused about what you think the converse is, and what your definition of a closed set is. You're happy that $$\text{all convergent sequences in $F$ converge to a limit in }F \implies F \text{ is closed}$$ so what you're missing is $$ F \text{ is closed}\implies\text{all convergent sequences in $F$ converge to a limit in }F $$
But "$F$ consisting only of interior points" is equivalent to $F$ being open, which doesn't have anything to do with this statement. Since $F$ closed $\iff$ the complement of $F$ is open, one can argue the following if you want to think about interior points:
Suppose a sequence converges to some point $x\not\in F$. But the complement of $F$ is open, so $x$ is interior to the complement, and there is an open ball $B_x$ which also does not intersect $F$. But then the sequence cannot lie in $F$ since it must eventually lie entirely within $B_x$ since it tends to $x$. Intuitively: there's a neighbourhood of the limit which is stuck outside $F$.
Edit: If instead you want to think in terms of $F$ containing all 'boundary' points or limit points, then the result you want is true by definition! 'Closed' literally means 'closed under taking limits'. |
H: logs Challenge between two students >>be smart
Two students were given the equation $2^{4x+6} = 3^{6x-3}$.
1.steve rearranged to get $2^{4x+6} - 3^{6x-3} =0$
then wrote $\log (2^{4x+6} - 3^{6x-3}) = \log0$
are these legal steps ? if not explain what is wrong with them
2 Ali wrote $\log( 2^{4x+6}) =\log(3^{6x-3})$ then $\log( 2^{4x+6}) - \log(3^{6x-3})=0 $
finally $2^{4x+6} / 3^{6x-3} = 1$.
are these legal steps ? if not explain what is wrong with them
AI: The first is not legal, as you cannot take the log of $0$. The second is legal, but I do not see it as progress toward finding $x$. Ali should use the laws of logarithms to pull down the powers. His final equation can be reached in one step from the initial equation and does not give a value for $x$. |
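For completeness (my continuation, not part of the original answer): pulling the powers down gives $(4x+6)\log 2=(6x-3)\log 3$, hence $$x=\frac{3\log 3+6\log 2}{6\log 3-4\log 2}\approx 1.95.$$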
H: A problem on square matrices
If $B,C$ are $n$ rowed square matrices and if $A=B+C, BC=CB, C^2=O$, then show that for every $n \in \mathbb N$, $$A^{n+1}=B^n(B+(n+1)C)$$
I tried to prove it using mathematical induction. But I could not get $P(1)$ to be true.
$$P(1): A^2 = B(B+2C)$$
I couldn't equate them.
Please offer some assistance.
Thank you. :)
AI: $$A^2=(B+C)^2=B^2+BC+CB+C^2$$
Now use $CB=BC$ and $C^2=0$. |
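One way to finish the induction (my sketch, using the same hypotheses $BC=CB$ and $C^2=O$): assuming $A^{n+1}=B^n\bigl(B+(n+1)C\bigr)$, we get $$A^{n+2}=(B+C)B^n\bigl(B+(n+1)C\bigr)=B^{n+2}+(n+1)B^{n+1}C+CB^{n+1}+(n+1)CB^{n}C=B^{n+1}\bigl(B+(n+2)C\bigr),$$ since $CB=BC$ gives $CB^{n+1}=B^{n+1}C$ and $CB^{n}C=B^{n}C^2=O$.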
H: A matrix problem :)
If $l_i,m_i,n_i$ ; $i=1,2,3$ denote the direction cosines of three mutually perpendicular vectors in space, provided that $AA^T=I$ ,where
$$A=\begin{bmatrix}
l_1 & m_1 & n_1 \\
l_2 & m_2 & n_2 \\
l_3 & m_3 & n_3 \\
\end{bmatrix} $$
I couldn't quite understand the question. Sorry for not posting 'how I tried to solve the answer'.
Please provide assistance.
Thank you. :)
AI: Hint: Consider first $$A^TA= \begin{bmatrix}l_1 & l_2 & l_3 \\m_1 & m_2 & m_3 \\n_1 & n_2 & n_3 \\ \end{bmatrix} \begin{bmatrix}l_1 & m_1 & n_1 \\l_2 & m_2 & n_2 \\l_3 & m_3 & n_3 \\ \end{bmatrix} = \cdots $$ by working out a few of the elements in terms of the components of $\mathbf {l,m,n}$. Then ask yourself: if $AB=I$, then what is $A^{-1}$? Therefore, what is $BA$? |
H: Solving 3 simultaneous cubic equations
I have three equations of the form:
$$i_1^3L_1+i_1K+V_1+(i_2+i_3+C)Z_n=0$$
$$i_2^3L_2+i_2K+V_2+(i_1+i_3+C)Z_n=0$$
$$i_3^3L_3+i_3K+V_3+(i_1+i_2+C)Z_n=0$$
where $L_1,L_2,L_3,K,V_1,V_2,V_3,C$ and $Z_n$ are all known constants.
What methods can I use to obtain the values of $i_1,i_2$ and $i_3$ ?
AI: A numerical way to solve this would be to use the Newton-Raphson method. This method can be extended to 3 dimensions as follows:
$$\vec{i_{n+1}}=\vec{i_n}-J^{-1}(i_n)\vec{f}(i_n)$$
Where $J$ is the Jacobian matrix of the system:
$$J=
\begin{bmatrix}
3i_1^2L_1+K & Z_n & Z_n \\
Z_n & 3i_2^2L_2+K & Z_n \\
Z_n & Z_n & 3i_3^2L_3+K \\
\end{bmatrix}
$$
Choose an initial "guess" $\vec{i_0}$ and repeat this process. Since it's an iterative method, the more iterations you perform, the closer you get to the solution. |
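A minimal NumPy sketch of this iteration (my own illustration, not part of the answer; the constants below are arbitrary placeholders, and convergence depends on the actual constants and on the initial guess):

    import numpy as np

    # placeholder constants -- replace with the known values
    L = np.array([1.0, 2.0, 3.0])      # L1, L2, L3
    V = np.array([0.5, -1.0, 2.0])     # V1, V2, V3
    K, C, Zn = 4.0, 0.1, 0.7

    def f(i):
        # residuals of the three equations; (i2 + i3 + C) etc. equals (sum - i_k + C)
        s = i.sum()
        return i**3 * L + i * K + V + (s - i + C) * Zn

    def jacobian(i):
        J = np.full((3, 3), Zn)
        np.fill_diagonal(J, 3 * i**2 * L + K)
        return J

    i = np.zeros(3)                    # initial guess
    for _ in range(50):
        step = np.linalg.solve(jacobian(i), f(i))
        i -= step
        if np.linalg.norm(step) < 1e-12:
            break

    print(i, f(i))                     # f(i) should be close to zero at a solution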
H: Can Green's theorem be used in a plane other than the xy-plane?
In the following 2D case, Green's theorem solves the following problem:
$$\vec{F}=\langle{xy+\ln{(\sin{e^{x})},x^2+e^{y^2}}}\rangle$$
$$\oint_C\vec{F}\cdot{d\vec{r}}=\iint_Dx\space{dA}$$
where C is the unit circle $x^2+y^2=1$, and D is the unit disk $x^2+y^2\le{1}$.
But what if instead the problem were:
$$\vec{F}=\langle{\frac{\sqrt{2}}{2}(xy+\ln{(\sin{e^x}}),x^2+e^{y^2},-\frac{\sqrt{2}}{2}(xy+\ln{(\sin{e^x}})}\rangle$$
$$\oint_C\vec{F}\cdot{d\vec{r}}$$
And $C$ is instead the unit circle $\vec{r}(t)=\langle{\frac{\sqrt{2}}{2}\cos{t},\sin{t},-\frac{\sqrt{2}}{2}\cos{t}}\rangle$.
This is the same problem as the first one, but rotated 45 degrees about the y-axis. I know I could solve the second problem by simply rotating the path and the vector field back 45 degrees. But is there any way to apply Green's theorem to the second problem directly?
AI: Green's theorem is just a special case of Stokes' theorem (which is just a special case of the generalized Stokes' theorem).
$$\oint_{\partial M} d\ell \cdot F = \int_M dA \cdot (\nabla \times F)$$ |
H: Solving simple system of congruences
I have this example from wikipedia:
$$x \equiv 3 \pmod 4$$
$$x \equiv 4 \pmod 5$$
$$x = 4a + 3\\
4a + 3 \equiv 4 \pmod 5\\
4a \equiv 1 \equiv -4 \pmod 5\\
a \equiv -1 \pmod 5\\
x = 4(5b - 1) + 3 = 20b - 1
$$
But wikipedia shows $20b + 19$...what did I do wrong here?
AI: Your answer is equivalent since $\ 19 \equiv -1\pmod{20}.\ $ Explicitly $\ 20b\!+\!19 = 20(b\!+\!1)\!-\!1.\,$
Simpler: $\, x \equiv -1\, $ mod $\,4,5\ \Rightarrow\ x\equiv -1\,$ mod $\,{\rm lcm}(4,5) = 20,\ $ i.e. $\ 4,5\mid x\!+\!1\,\Rightarrow\, 4\cdot 5\mid x\!+\!1.$ |
H: Element by element formulae for 3x3 matrix inversion
Given a 3 x 3 matrix:
$$
A= \begin{bmatrix}
a & b & c \\
d & e & f \\
g & h & i \\
\end{bmatrix}
$$
Can $A^{-1}$ be shown as a 3x3 matrix with each element in terms of $a,b,c,d,e,f,g,h$ and $i$, showing basic operators (only $+ - / *$)?
AI: $$A^{-1}=\frac{1}{\det(A)} adj(A)$$
$$\det(A)=aei+dhc+bfg-ceg-bdi-afh$$
$$adj(A)= \begin{bmatrix}
ei-fh & ch-bi & bf-ce \\
fg-di & ai-cg & dc-af \\
dh-ge & bg-ah & ae-bd \\
\end{bmatrix}$$
_ |
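If it is useful to see the formulas as actual code, here is a small Python sketch (my own illustration, not part of the answer; it only uses $+,-,*,/$ element by element, and the test matrix is arbitrary):

    def inverse_3x3(a, b, c, d, e, f, g, h, i):
        det = a*e*i + d*h*c + b*f*g - c*e*g - b*d*i - a*f*h
        adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
               [f*g - d*i, a*i - c*g, d*c - a*f],
               [d*h - g*e, b*g - a*h, a*e - b*d]]
        return [[adj[r][s] / det for s in range(3)] for r in range(3)]

    # quick check against an arbitrary invertible matrix
    M = [[2.0, 0.0, 1.0],
         [1.0, 3.0, 0.0],
         [0.0, 1.0, 4.0]]
    Minv = inverse_3x3(M[0][0], M[0][1], M[0][2],
                       M[1][0], M[1][1], M[1][2],
                       M[2][0], M[2][1], M[2][2])
    # multiplying M by Minv should give (approximately) the identity matrix
    print([[sum(M[r][t] * Minv[t][s] for t in range(3)) for s in range(3)] for r in range(3)])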
H: Definition of inner product
I'm studying the inner product part of linear algebra and it's a bit tricky to understand what the inner product really means. Does it indicate the length of vectors? So if we calculate $\langle x,y\rangle$ we have to end up with a scalar?
Here is an example. If $x$ and $y$ are column vectors of a $2\times 2$ matrix and $F=\mathbb R$, then we calculate $\langle x,y\rangle=x^T y$. I wonder, is it equivalent to $y^T x$ (because it results in the same value)? Seriously, is there no difference?
Also, the example says that if $x$ and $y$ are column vectors of a $2\times 2$ matrix and $F=\mathbb C$, then $\langle x,y\rangle=y^* x$ where $*$ means conjugate transpose. I can't understand why...
AI: The inner (or dot) product of two real-valued vectors is defined by
$$\sum_ix_iy_i$$
where $x_i$ and $y_i$ are the components of the two vectors. This is why $x^Ty=y^Tx$. If the vectors are complex-valued the inner product is defined by
$$\sum_ix_iy_i^{*}$$
This results in $x^{*}x=||x||^2$ (real-valued norm), which is what you want, also for complex vectors. |
H: Does There Exist a Term for the Unique Nonpositive Square Root of a Nonnegative Real Number?
The term "principal square root" describes the unique nonnegative square root of a nonnegative real number.
Does there exist a term to describe the unique nonpositive square root of a nonnegative real number?
AI: I have heard "negative square root." It's about the simplest and most aptly descriptive thing I have heard. |
H: Solving modular equation
$$13863x \equiv 12282 \pmod {32394}$$
I need to solve this equation. If I'd found the inverse of 13863 and multiply the equation by this, I'd get the solution. So:
$$13863c \equiv 1 \pmod {32394}$$
And now - how can I find this inverse? The numbers are too big to just look for the multiplier of $13863$Β .Β .Β .
AI: The $\gcd$ of $32394$ and $13863$ is $3$, so there is no inverse. What you need to do is divide out any common factors in the original equation: everything here is divisible by $3$.
$$13863x\equiv 12282\bmod 32394$$
In general, if you want to solve $$dx\equiv da\bmod dm,$$
then for any solution $y$ of the congruence $x\equiv a\bmod m$, the numbers $$y,\quad y+m,\quad\ldots,\quad y+(d-1)m$$
will all be solutions to $dx\equiv da\bmod dm$, because
$$d(y+km)=dy+k(dm)\equiv dy\equiv da\bmod dm.$$
When faced with a congruence where the $\gcd$ of all the terms, together with the modulus, is not $1$, divide through by their $\gcd$, solve that congruence, and then "move back up" to obtain the solutions to your original congruence.
So, let's solve
$$4621x\equiv 4094\bmod 10798$$
(which is what we get after dividing everything by their $\gcd$ of $3$). As you've noted in your question, we want to find the inverse for $4621$ modulo $10798$.
Run the Euclidean algorithm for computing $\gcd(4621,10798)$:
$$\begin{align*}
10798 &= 2\cdot 4621 + 1556\\
4621&= 2\cdot 1556 + 1509\\
1556&=1\cdot 1509 + 47\\
1509 & = 32\cdot 47+ 5\\
47&= 9\cdot 5+2\\
5&=2\cdot 2+\fbox{1}\\
2&=2\cdot 1+0
\end{align*}$$
then "run it backwards":
$$\begin{align*}
\fbox{1}&=\mathbf{5-2\cdot2}\\
&\quad =5-2\cdot (47-9\cdot 5)\\
&\quad =5-2\cdot 47+ 18\cdot 5\\\\
&=-2\cdot 47+ 19\cdot 5\\
&\quad=-2\cdot 47+19\cdot(1509- 32\cdot 47)\\
&\quad= -2\cdot 47+19\cdot 1509-608\cdot 47\\\\
&=19\cdot 1509-610\cdot 47\\
&\\
&\vdots\\
&\\
&=u\cdot 4621+v\cdot 10798
\end{align*}$$
And now we have our inverse: $u$, because
$$u\cdot 4621=1-v\cdot 10798$$
implies that
$$u\cdot 4621\equiv 1\bmod 10798.$$ |
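If you want to check the arithmetic, here is a short Python snippet (my own addition, not part of the answer) that runs the extended Euclidean algorithm, recovers the inverse of $4621$ modulo $10798$, and lists the three solutions of the original congruence modulo $32394$:

    def extended_gcd(a, b):
        # returns (g, u, v) with u*a + v*b == g == gcd(a, b)
        if b == 0:
            return a, 1, 0
        g, u, v = extended_gcd(b, a % b)
        return g, v, u - (a // b) * v

    g, u, v = extended_gcd(4621, 10798)
    assert g == 1
    inv = u % 10798                      # inverse of 4621 mod 10798
    y = (inv * 4094) % 10798             # solves 4621*x = 4094 (mod 10798)
    solutions = [(y + k * 10798) % 32394 for k in range(3)]
    print(inv, y, solutions)
    # each s in solutions satisfies 13863*s % 32394 == 12282
    assert all(13863 * s % 32394 == 12282 for s in solutions)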
H: Big Greeks and commutation
Does a sum or product symbol, $\Sigma$ or $\Pi$, imply an ordering?
Clearly if $\mathbf{x}_i$ is a matrix then:
$$\prod_{i=0}^{n} \mathbf{x}_i$$
depends on the order of the multiplication. But, even if one accepts that it has a sequence, it is not clear if it should mean $\mathbf{x}_0\mathbf{x}_1 \cdots \mathbf{x}_{n-1}\mathbf{x}_n$ or $\mathbf{x}_n\mathbf{x}_{n-1} \cdots \mathbf{x}_{1}\mathbf{x}_0$.
A similar question, is there a "big" wedge product convention?
$$\bigwedge_{i=0}^{n} \mathbf{x}_i \;=\; \mathbf{x}_0 \wedge \mathbf{x}_1 \wedge \cdots \wedge \mathbf{x}_{n-1} \wedge \mathbf{x}_{n} $$
AI: I think that even if it's not written explicitly anywhere, the $\mathbf{x}_0\mathbf{x}_1 \cdots \mathbf{x}_{n-1}\mathbf{x}_n$ convention is the most predictable and sensible.
I've never seen the distinction made explicit, since in most circumstances the operation involved is commutative.
I did see somewhere on m.SE someone suggest $\mathbf{x}_i\prod_{i=1}^n$ to denote $\mathbf{x}_n\mathbf{x}_{n-1} \cdots \mathbf{x}_{1}\mathbf{x}_0$, but that may have been with tongue in cheek... |
H: Generating functions to solve recurrence relation
Use generating functions to solve the recurrence relation
$$ a_{n} = 3a_{nβ1} + 2 $$
with initial condition $a_{0} = 1$.
If I can bring it to $ a_{n}=k a_{n-1} $ I can solve it easily. Thank you
AI: Here is a start
$$ \sum_{n=0}^{\infty}a_{n+1}x^n = 3 \sum_{n=0}^{\infty} a_{n}x^n + 2\sum_{n=0}^{\infty} x^n. $$
Now, recall this
$$ \sum_{n=0}^{\infty} x^n =\frac{1}{1-x}.$$
I think you can finish it now. Just follow the technique in this problem. |
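In case it helps to see where this goes (my continuation of the hint): writing $A(x)=\sum_{n\ge0}a_nx^n$, the relation above reads $\frac{A(x)-1}{x}=3A(x)+\frac{2}{1-x}$, so $$A(x)=\frac{1+x}{(1-x)(1-3x)}=\frac{2}{1-3x}-\frac{1}{1-x},$$ and comparing coefficients gives $a_n=2\cdot 3^n-1$.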
H: Continuous random variable question
$ X $ is a non-negative continuous random variable with density function $f$ and distribution function $F$.
Use integration by parts to show that
$ \int_0^{\infty} ( 1- F(x)) dx =
\int_0^{\infty}xf(x)dx $
I'm quite puzzled on how to even integrate $F(X)$ to get $f(x)$ :S
AI: Hint: In the usual calculus notation for integration by parts, let $u=1-F(x)$ and $dv=dx$. Then $du=-f(x)\,dx$ and we can take $v=x$.
Added: As pointed out by Π±Π΅ΡΠΊΠ°ΠΉ. the fact that $\lim_{x\to\infty}x(1-F(x))=0$ requires justification. This takes a few lines. For details, please see this part of the Wikipedia article on cumulative distribution functions. |
H: Proof of cauchy schwarz inequality in inner product space
$0 \le \lVert x-cy \rVert ^2= \langle x-cy,x-cy \rangle = \langle x,x\rangle -\bar{c}\langle x,y\rangle-c\langle y,x\rangle +c\bar{c}\langle y,y\rangle$.
If we set $c=\frac{\langle x,y\rangle}{\langle y,y\rangle}$ then the inequality becomes
$0 \le \langle x,x\rangle - \frac {\lvert \langle x,y\rangle \rvert^2}{\langle y,y\rangle}$.
I worked through the computation above and wonder why $\langle x,y\rangle \langle y,x\rangle$ becomes $\lvert\langle x,y\rangle\rvert^2$. Over a complex field that's not true, is it?
AI: Note that for any complex number $\alpha$, we have: $$\alpha \overline{\alpha}=|\alpha|^2$$where $|\alpha|$ is the modulus (or length) of the complex number $\alpha$. Since an inner product satisfies $\langle y,x\rangle=\overline{\langle x,y\rangle}$, this gives $\langle x,y\rangle \langle y,x\rangle=\lvert\langle x,y\rangle\rvert^2$, also over $\mathbb C$. |
H: How to solve this minimization problem?
I have a question which asks:
A cylinder shaped can holds $5000cm^3$ of water. Find the dimensions that will minimise the cost of metal in making the can.
What I did was express the height in terms of the radius as:
$$h=\frac{5000}{(\pi)(R^2)}$$
Then I differentiate it to:
$$\frac{(\pi)(R^2)-2R}{((\pi)(R^2))^2}$$
Now, where should I go from here? should I let it equal zero or should I do something else?
AI: Hint: The amount of metal needed to make the can would consist of the bottom and the lid (both of which are circles) and the cylinder which when unrolled is a rectangle. Thus, you have to formulate your problem as follows:
Objective: Minimize Area of (bottom + lid + cylinder)
Subject to the condition that:
Volume of cylinder = 5000
Note: You have two variables $r$ and $h$. Use the constraint to eliminate one of these variables from the objective function and then use calculus to find the optimum. |
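If it helps to see the computation through (my continuation of the hint, assuming the usual closed can with both a top and a bottom): with $A=2\pi r^2+2\pi rh$ and the constraint $\pi r^2h=5000$ we get $A(r)=2\pi r^2+\frac{10000}{r}$, so $$A'(r)=4\pi r-\frac{10000}{r^2}=0\quad\Longrightarrow\quad r=\left(\frac{2500}{\pi}\right)^{1/3}\approx 9.27\text{ cm},\qquad h=\frac{5000}{\pi r^2}=2r\approx 18.5\text{ cm}.$$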
H: Analysis: Integration (Riemann/Step functions)
Using the definition of the integral of a continuous function, and
that $\displaystyle\sum_{j=0}^{2^n-1} j = (2^n-1)2^{n-1}$ to show that
$\int_0^1x \ dx = \frac{1}{2}$
I'm having trouble even starting this question as I'm not sure how to take advantage of the sum given.
Thanks!
AI: Since $f$ is continuous, it is integrable. Hence any Riemann sum converges to the integral of $f$ as the mesh of the partition tends to $0$. Now I suggest you consider the simplest partition $\Delta_j:=[\frac{j}{2^n},\frac{j+1}{2^n}]$ with tags $\xi_j=\frac{j}{2^n}$, where $j=0,\ldots, 2^{n}-1$. |
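Concretely (my continuation of the hint): the corresponding Riemann sum is $$\sum_{j=0}^{2^n-1}\frac{j}{2^n}\cdot\frac1{2^n}=\frac1{4^n}\sum_{j=0}^{2^n-1}j=\frac{(2^n-1)2^{n-1}}{4^n}=\frac{2^n-1}{2^{n+1}}\ \longrightarrow\ \frac12\quad(n\to\infty),$$ which is the claimed value of $\int_0^1 x\,dx$.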
H: Proving that the independent set problem is in NP-Complete
Consider the problem of "Independent set" in grahps. Given a graph $G$ and an integer $k$, the machine determines whether the graph $G$ contains an independent set of size $k$.
I need to prove that it's in NP-Complete by showing a reduction from another known language in NP-Complete (vertex cover, clique, sat) to IS problem. which one would you suggest?
Thanks in advance
AI: Vertex cover. If $S$ is a vertex cover of $G$ of size $|V(G)| - k$, then the collection of vertices of $G$ not in $S$ must form an independent set (why?) of size $k$. |
H: Improper Integral $\int_{1/e}^1 \frac{dx}{x\sqrt{\ln{(x)}}} $
I need some advice on how to evaluate it.
$$\int\limits_\frac{1}{e}^1 \frac{dx}{x\sqrt{\ln{(x)}}} $$
Thanks!
AI: Here's a hint:
$$
\int_{1/e}^1 \frac{1}{\sqrt{\ln x}} {\huge(}\frac{dx}{x}{\huge)}.
$$
What that is hinting at is what you need to learn in order to understand substitutions. It's all about the chain rule. The part in the gigantic parentheses becomes $du$. |
H: How do I divide a set of data samples which follow a logarithmic distribution?
I'm working for the first time with Logarithmic distribution. I have a set of samples which follow logarithmic distribution. I extracted the maximum and the minimum values from the set and defined the interval as [min,max]. Now I need to partition this range into 5 parts. Say [min,a], [a,b], [b,c], [c,d] and [d,max].
If I calculate the mean, then I only get two slices, but I need 5. How do I go about it? Any pointers to articles, web pages and sources that describe and explain related concepts are also much appreciated.
AI: Two natural ways are equal intervals, so $a=\min+(\max-\min)/5$ and so on, and equal ratios, so define $r=\left(\frac {\max}{\min}\right)^{\frac 15}$, then $a=r\min, b=r^2 \min$, etc. These work for any distribution. The ratio approach doesn't work if $\min \lt 0 \lt \max$. |
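A small Python sketch of both options (my own illustration, not part of the answer; the sample values are made up, and the ratio version assumes strictly positive samples):

    def equal_interval_edges(lo, hi, parts=5):
        step = (hi - lo) / parts
        return [lo + k * step for k in range(parts + 1)]

    def equal_ratio_edges(lo, hi, parts=5):
        r = (hi / lo) ** (1.0 / parts)   # requires lo > 0
        return [lo * r**k for k in range(parts + 1)]

    samples = [1.2, 3.5, 7.0, 20.0, 55.0, 160.0]
    lo, hi = min(samples), max(samples)
    print(equal_interval_edges(lo, hi))   # 1.2, 32.96, 64.72, 96.48, 128.24, 160.0
    print(equal_ratio_edges(lo, hi))      # roughly 1.2, 3.19, 8.5, 22.6, 60.1, 160.0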
H: Is restriction of scalars well-defined on subspaces?
Let $K/k$ be an extension of fields and let $v_1,\ldots,v_r,u_1,\ldots,u_r\in k^n$. If the span of the $v$'s over $K$ equals the span of the $u$'s over $K$, must the two spans also be equal over $k$?
$$_K\langle v_1,\ldots,v_r\rangle=_K\langle u_1,\ldots,u_r\rangle\qquad\overset{?}\Longrightarrow\qquad_k\langle v_1,\ldots,v_r\rangle=_k\langle u_1,\ldots,u_r\rangle$$
Here is another way to look at this question: Given a subspace $U\subset K^n$ with some basis in $k^n$, is the procedure of restricting scalars to $k$ well-defined, or does it depend on the choice of basis of $U$?
AI: Step 1: A subset $S \subset k^n$ is $k$-linearly independent iff it is $K$-linearly independent. This can be shown for instance by expressing linear independence in terms of the reduced row echelon form (rref) of the matrix with columns the elements of $S$ and appealing to the uniqueness of rref.
(Alternate proof: if $S$ is $k$-linearly independent, extend it to a $k$-basis $B$ of $k^n$. The matrix $M$ with columns the elements of $B$ is invertible, i.e., there is a matrix $N$ with $MN = NM = 1$. This matrix equation holds over $K$ as well and shows that the columns of $M$ are $K$-linearly independent.)
Step 2: To ask whether
$\langle u_1,\ldots,u_r \rangle_K \subset \langle v_1,\ldots,v_s \rangle_K$
we may assume without loss of generality that $v_1,\ldots,v_s$ are $k$-linearly independent, because by throwing away some of the $v_j$'s we can reduce to this case without changing the span (either over $k$ or over $K$). Then the above containment holds iff for all $i$, the set $v_1,\ldots,v_s,u_i$ is $K$-linearly dependent. As above, this holds over $k$ iff it holds over $K$, so the given containment implies
$\langle u_1,\ldots,u_r \rangle_k \subset \langle v_1,\ldots,v_s \rangle_k$.
Step 3: The affirmative answer to your question follows immediately from Step 2. |
H: Binomial coefficient series $\sum\limits_{k=1}^n (-1)^{k+1} k \binom nk=0$
I'm practicing for my maths term test mainly on binomial coefficients. I can't seem to find out how to prove the following identity. Any advice?
$$ \sum\limits_{k=1}^n (-1)^{k+1} k{{n}\choose k} = 0 $$
Thanks in advance.
AI: Hint: Note that for $x\ge 1$, $x\dbinom{n}{x}=n\dbinom{n-1}{x-1}$. This can be verified by using the representation of the binomial coefficients in terms of factorials.
We can bring the $n$ to the front, and end up with something that is perhaps familiar. If it is not familiar, expand $(1-1)^m$ using the Binomial Theorem. |
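Spelled out (my continuation of the hint), for $n\ge2$: $$\sum_{k=1}^n (-1)^{k+1} k\binom nk=n\sum_{k=1}^n(-1)^{k+1}\binom{n-1}{k-1}=n\sum_{j=0}^{n-1}(-1)^{j}\binom{n-1}{j}=n(1-1)^{n-1}=0.$$ (Note the identity needs $n\ge2$; for $n=1$ the left-hand side equals $1$.)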
H: Can you raise $\pi$ to a real power to make it rational?
We're all familiar with this beautiful proof that an irrational number raised to an irrational power can be rational. It goes something like this:
Take $(\sqrt{2})^{\sqrt{2}}$
If it's rational, then you proved it, if it's irrational, take $((\sqrt{2})^{\sqrt{2}} ){^\sqrt{2}} = 2$ and you've proved it.
I'm wondering if you can raise $\pi$ or $e$ to a certain non-trivial real power to make it rational? And if not, where is the proof that it can't be done?
p.s. - I almost left out the real part, but then I realized that $e^{i\pi} = -1$.
AI: Of course. Pick any positive rational $p$ and let $x=\log_\pi p$, then $\pi^x=p$. |
H: Measuring distances on any coordinate system
I was reading the book The ABC of Relativity by Bertrand Russell, and at some point the author mentions a method for measuring the distance between 2 points in any coordinate system. He says that the formula was discovered by Gauss and is a generalization of the Pythagorean theorem, although he doesn't write the explicit formula mathematically. Since I have no more information, I couldn't find anything on it. If I understand the formula correctly, it should be something like this when the coordinates can be expressed with 2 components $(x;y)$:
$$d(A,B)^2= (x_A-x_B)^2+(y_A-y_B)^2+(x_A-x_B)(y_A-y_B)$$
Is this formula correct and where it comes from?
AI: I'm unsure if my answer will cover what you want, but since you've found this statement in a book of Relativity this probably is in the context of differential geometry. Well, the idea is that you can generalize the notion of distance to any manifold you like by introducing the notion of a Riemannian Metric and the notion of a Geodesic. When I first learned General Relativity the books were a little confusing about that so I understand your worry.
Well, what are all of these and what do they have to do with Gauss? Well, first, a manifold is a generalization of a surface. Gauss studied surfaces in $\mathbb{R}^3$; however, mathematicians saw that the results obtained didn't depend on the structure of the background Euclidean space, and this leads to a generalization to a much more general object called a manifold, which in simplest terms is a space on which we can do geometry as we do on surfaces. A manifold is characterized by the nice property that each of its points has a neighbourhood equivalent in some sense to Euclidean space.
The classical way to tackle the problem of distances in surfaces is to introduce the object called the first fundamental form. The idea is that at each point $p$ of the surface $S$ we plug the tangent plane $T_pS$ and the elements of the tangent plane are the tangent vectors to $S$ at the point $p$ (imagine this, is not hard). We will be able to measure distances if we introduce on each of the tangent planes one inner product. Since the surface is a subset of $\mathbb{R}^3$ we already have one candidate to the inner product: we just restrict that of $\mathbb{R}^3$. Then if $v,w\in T_pS$ let's denote $\left\langle v, w\right\rangle_p$ the inner product of $\mathbb{R}^3$ restricted to $T_pS$. Now we introduce the first fundamental form as $I_p : T_pS \to \mathbb{R}$ by requiring that:
$$I_p(w)=\left\langle w, w\right\rangle_p$$
Now look at what we did: in $\mathbb{R}^3$ alone the inner product allows us to compute the length of vectors by taking the square root of the inner product of the vector and itself. Now the tangent plane is a subset of $\mathbb{R}^3$ so we can restric the inner product to the tangent plane and use it's inner product to define that quadratic form. That quadratic form gives us the notion of length of vectors in the tangent plane. Now if you recall what is the length of a curve and how do we compute it you'll see that introducing this quadratic form we are allowed to compute the length of curves over surfaces.
But think for a while: in three-space, from all possible curves joining two points $A$ and $B$, which of them do we select, whose length we call the distance from $A$ to $B$? Well, we select the one that gives us the minimum length, which is the line segment joining the points. On a surface something similar happens: we define a curve in the surface that is the analogue of a line, a curve we call a geodesic, and when we pick a geodesic between points we call the length of the geodesic the distance between the points.
You might say: that's all cool, but I want General Relativity! It's a four dimensional space time not embedded into any other higher dimensional space! And what I tell you is: calm down, it is all related. The problem is that when you remove the background space you have to be more careful, you'll need to introduce the notion of a manifold, and since we don't have the background space we'll have to refine the notion of a tangent space and of the inner product. In practice, the inner product association to each point will become a tensor field called the metric tensor (which is the term $g_{ab}$ which appears in Einstein's Equation).
And finally how do you come to learn all of this? Well, my recommendation is: to get started study Do Carmo's Differential Geometry of Curves and Surfaces - it's a book about the differential geometry of surfaces in $\mathbb{R}^3$, good to get started. Then you'll need to move to the differential geometry of manifolds. My recommendation is that you pick Spivak's A Comprehensive Introduction to Differential Geometry. Spivak however uses the notion of metric spaces, if you don't know anything about them you can go after a book about metric spaces, and to study this my recommendation is to get Rudin's Principles of Mathematical Analysis and read the chapter of metric spaces and after it get some topology book and read the chapter of metric spaces.
I hope this helps you somehow. Good luck! |
H: Proving combinatoric identity using the inclusion-exclusion principle
I need to prove the identity $\displaystyle \sum_{k=0}^{n}(-1)^{k}{2n-k \choose k}2^{2n-2k}=2n+1$ using the inclusion-exclusion principle. It was hinted to think about the number of ways to color the numbers $1,\dots,2n$ in the colors red and blue such that if $i$ is colored red then so is $i-1$. Well, it is clear why the number of ways to do this is $2n+1$, but I don't manage to show why it is also equal to the left side of the equation.
Can anyone give me a hint about the sets I should define in order to use the inclusion-exclusion principle? Thanks!
AI: There are $2^{2n}$ colourings in total, $2^{2n-2}{2n-1\choose 1}$ ways to pick a pair of adjacent numbers and colour the first blue and the second red and the rest any colour, $2^{2n-4}{2n-2\choose 2}$ ways to pick two pairs of numbers and have both with the first color blue and the second colour red and the rest any colour, and so on. |
H: Need help with $\int_0^\infty x^{-\frac{3}{2}}\ \text{Li}_{\sqrt{2}}(-x)\ dx$
I need help with solving this integral:
$$\int_0^\infty x^{-\frac{3}{2}}\ \text{Li}_{\sqrt{2}}(-x)\ dx,$$
where $\text{Li}_{s}(z)$ is the polylogarithm.
AI: $$\int_0^\infty x^{-\frac{3}{2}}\ \text{Li}_{\sqrt{2}}(-x)\ \mathrm dx=-2^{\sqrt{2}}\pi.$$
Proof:
Use formula (3) from here to get an integral representation of the polylogarithm:
$$\text{Li}_s(-x)=-\frac{1}{\Gamma(s)}\int_0^\infty\frac{k^{s-1}}{\frac{e^k}{x}+1}\mathrm dk.$$
Then, changing the order of integration,
$$\int_0^\infty x^{-p}\ \text{Li}_s(-x)\mathrm dx=-\frac{1}{\Gamma(s)}\int_0^\infty\int_0^\infty\frac{x^{-p}\,\ k^{s-1}}{\frac{e^k}{x}+1}\mathrm dx\ \mathrm dk=\\\frac{\pi}{\Gamma(s)\sin \pi p}\int_0^\infty e^{k(1-p)}\ k^{s-1}\mathrm dk=\frac{\pi}{(p-1)^s \sin \pi p}$$ |
H: Is a series (summation) of continuous functions automatically continuous?
I'm being asked to show that a given series (of rational functions) converges uniformly on a given disc, and then and asked to use this fact to show that integrating its limit function (i.e. a summation of rational functions) along a given contour results in a given series.
In the solution provided, the argument is that each of the rational functions is continuous, and thus the uniform convergence implies that the limit function is continuous too.
I don't understand the need to invoke the uniform-convergence-implies-continuous-limit theorem. Isn't a series of rational functions already automatically continuous?
AI: Consider the following sequence:
$$f_1(x) = x \\
f_2(x) = x^2 - x \\
f_3(x) = x^3 - x^2 ...$$
Each function is a rational function (even a polynomial) and obviously continuous.
Now look at the series $\sum_n f_n(x)$ in the interval $[0, 1]$. This converges (pointwise) to $0$ in $[0, 1)$ and to $1$ at $x=1$, which is not continuous.
Historical note: The following is taken from the Wikipedia article List of incomplete proofs.
In his Cours d'analyse of 1821, Cauchy "proved" that if a sum of continuous functions converges pointwise, then its limit is also continuous. However, Abel observed three years later that this is not the case. For the conclusion to hold, "pointwise convergence" must be replaced with "uniform convergence". There are many counterexamples. For example, a Fourier series of sine and cosine functions, all continuous, may converge to a discontinuous function such as a step function. |
H: About continuous functions and arithmetic progressions
I've tried to solve this question, but without success...
The problem is the following:
A continuous function $f:[a,b]\rightarrow \mathbb{R}$ assumes both positive and negative values on its domain. Show that there exist $k$ numbers $a_1,a_2,\ldots,a_k$ in the domain, forming an arithmetic progression, such that
$$f(a_1)+f(a_2)+\cdots+f(a_k)=0$$
Someone can help me with this question?
AI: Fix $k$. Let the domain of the function include the interval $[a,b]$, where $f(a) \times f(b) < 0 $.
Consider the value of $ f(a) + f( a + \frac{b-a}{k-1}) + \ldots + f(b) $. If this is 0, we are done. Otherwise it has the same sign as exactly one of $f(a)$ and $f(b)$; by symmetry (interchanging the roles of $a$ and $b$) we may assume it differs in sign from $f(a)$.
Now, consider the function
$$F(a, \epsilon) = f(a) + f(a + \epsilon) + f(a + 2 \epsilon) + \ldots + f(a + (k-1)\epsilon ), $$
where $\epsilon \in [0, \frac{b-a} {k-1}] $.
Apply the Intermediate-Value Theorem, and we are done. |
H: Does this polynomial factorize further?
I just did a national exam and this question was in it; I am convinced this does not work:
Given that $(x - 1)$ is a factor of $x^3 + 3x^2 + x - 5$, factorize this cubic fully.
My attempt
1 | 1 3 1 -5
| 1 4 5
|____________
1 4 5 0
$$(x - 1)(x^2 + 4x + 5) = 0$$
That's all I got. I gave up and couldn't factorize further. Am I missing something?
AI: The discriminant of your polynomial is $$\Delta=4^2-4\cdot 5<0$$ so this has no real roots, that is, factorization over $\Bbb R$ is finished. On the other hand, factorization over $\Bbb C$ is possible, yielding
$$x^2+4x+5=(x+2)^2+1=(x+2-i)(x+2+i)$$ |
H: Can't establish a lower bound on a supremum
I have a sequence of functions $f_{k,j}:[0,1]\to\mathbb{R}$ defined by
$$f_{k,j} = k^{\frac{1}{p}}\chi_{(\frac{j-1}{k},\frac{j}{k})},$$
for all $k\geq 1,1\leq j\leq k$.
This serves as an example of a sequence that converges to $0$ in measure, but not in $L^{p,\infty}$.
It is easy to verify convergence in measure, but I am trying to check that $\|f_{k,j}\|_{p,\infty}\geq 1$ for all $k\geq 1$. The following proof is given which I cannot follow:
(Take $\mu$ to be the Lebesgue measure.)
\begin{eqnarray*}
\|f_{k,j}\|_{p,\infty} &=& \sup_{\alpha > 0}\alpha\cdot\mu(\{x\in [0,1] : f_{k,j}(x) > \alpha\})^{\frac{1}{p}}\text{, (by definition)}\\
&\geq& \sup_{k\geq 1}\frac{(k - \frac{1}{k})^{\frac{1}{p}}}{k^{\frac{1}{p}}}\\
&=& 1
\end{eqnarray*}
I am stuck trying to verify the $(\geq)$ step. At first I thought it was just restricting the supremum to the range of $k\geq 1$ which is more specific than $\alpha > 0$, but $k$ is already being used as an index.
Any ideas?
AI: As David Mitra pointed out
$$
\mu(\{x\in[0,1]:f_{k,j}(x)>\alpha\})=
\begin{cases}
0\quad&\text{ if }\quad \alpha\geq k^{1/p}\\
k^{-1}\quad&\text{ if }\quad \alpha< k^{1/p}
\end{cases}
$$
So
$$
\alpha\mu(\{x\in[0,1]:f_{k,j}(x)>\alpha\})^{1/p}=
\begin{cases}
0\quad&\text{ if }\quad \alpha\geq k^{1/p}\\
\alpha k^{-1/p}\quad&\text{ if }\quad \alpha< k^{1/p}
\end{cases}
$$
Hence, letting $\alpha$ increase to $\alpha_0=k^{1/p}$, we conclude
$$
\sup\limits_{\alpha>0}\alpha\mu(\{x\in[0,1]:f_{k,j}(x)>\alpha\})^{1/p}=\alpha_0k^{-1/p}=1
$$ |
H: Does the splitting principle define chern classes for vector bundles if they are known for line bundles?
Suppose we have defined Chern classes $c_i$ for line bundles for all $i\geq 0$.
Let $E\to B$ be a rank $r$ vector bundle. By the splitting principle there exists a map $f:Y\to B$ such that $f^*E$ is a sum of line bundles $L_1\oplus\ldots\oplus L_r$ and $f^*:H^*(B)\to H^*(Y)$ is injective.
Does the splitting principle define chern classes for vector bundles if they are known for line bundles?
The splitting principle says only, that if the chern classes are defined in two ways (natural with respect to continuous maps), one can check on line bundles if the two ways end up to be the same. I don't see how the splitting principle could be used to define the classes because I fear that the decomposition into line bundles (or, the space $Y$ such that $f^*E$ splits) is not unique.
AI: To answer the question in the title: yes. You are really asking if the Chern classes defined via a splitting map are independent of the choice of splitting map $f$. First note that there exists a splitting map $f$ making the cohomology $H^*(Y)$ a free module over the cohomology of the base $H^*(B)$---I believe this follows from the usual construction of a splitting map via flags of subbundles.
Given another splitting map $f': Y' \rightarrow B$, form the fiber product $Y' \times_B Y$, with the canonical map to $B$ (induced by $f'$ or $f$). The map to $B$ from this fiber product corresponds to the map $R \rightarrow S' \otimes_R S$ on cohomology, where $R=H^*(B)$, $S'=H^*(Y')$ and $S=H^*(Y)$, and is hence still injective (here we use the Eilenberg-Moore spectral sequence, which actually degenerates immediately by the freeness of $H^*(Y)$ over $H^*(B)$). Inside $H^*(Y' \times_B Y)=S' \otimes_R S$ the Chern classes become equal, proving independence. |
H: Find the relation between the dimension of the nullspace of $A$ and $A^t$
Let $A$ be a $n \times n$ matrix, what is the relation between the dimension of the nullspace of the homogeneous system of $A$ and the one of $A^t$?
AI: Hint: For a given matrix $A$, the dimension of the left null space is equal to the dimension of the right null space. |
H: How are these inequalities simplified?
How does this:
(a > b && a > c && b <= c) ||
(a > b && a <= c && b < c)
simplify down to this:
a > b && b <= c
Whereas this similar expression
(a <= c && c >= b && a <= b) ||
(a <= c && c < b && a < b)
does not simplify to
a <= c && a <= b
but instead to
(b <= c && a <= b) ||
(b > c && a <= c)
?
Edit: Just to clarify, I can work it out in my head, but I'm trying to implement the simplification in code for general cases, and I'm struggling to understand the logic that separates the above two equations.
AI: For the first part:
If a > b && b <= c holds, then a > b && a > c && b <= c simplifies to a > c.
Similarly, a > b && a <= c && b < c simplifies to a <= c && b < c.
Now, let a > b && b <= c. Then there are two cases:
a > c. Then the first condition will hold.
a <= c. Then b < a <= c, so b < c holds and the second inequality is true.
This second step is not possible in the other case. You cannot derive a similar order, because the arrangement of variables is changed. |