H: How to evaluate $ \lim_{(x,y)\to(0,0)}\frac{y^2\sin^2x}{x^4+y^4} $
$$
\lim_{(x,y)\to(0,0)}\frac{y^2\sin^2x}{x^4+y^4}
$$
I believe the limit doesn't exist but I'm not sure how to prove/evaluate correctly. A hint would be great. Thanks.
AI: Write
$$\mathop {\lim }\limits_{(x,y) \to (0,0)} \frac{{{y^2}{x^2}}}{{{x^4} + {y^4}}}{\left( {\frac{{\sin x}}{x}} \right)^2}$$
Then choose $y=\alpha x$ and see what happens. Of course, use $$\frac{\sin x}x\to 1$$ |
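Carrying the hint out: along the line $y=\alpha x$ the first factor becomes
$$\frac{(\alpha x)^2x^2}{x^4+(\alpha x)^4}=\frac{\alpha^2}{1+\alpha^4},$$
which equals $0$ for $\alpha=0$ but $\tfrac12$ for $\alpha=1$; since $\left(\frac{\sin x}{x}\right)^2\to1$, the value along the line depends on $\alpha$, so the limit does not exist, confirming the suspicion in the question.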
H: Composition of exponential function with discontinuous function
Suppose $f(x)$ is a real-valued function that is discontinuous everywhere. Is $\exp(f(x))$ then also discontinuous everywhere?
AI: Let $g=\exp(f)$. Since $f$ is real, then $g$ is a positive function. But if $g$ were continuous at some $x$, then $f=\ln(g)$ would be continuous there, too (since the natural logarithm is defined and continuous on the positive reals). Thus, $g$ is discontinuous everywhere.
More generally, if we have functions $f_1,f_2$ such that $f_2\circ f_1$ "makes sense," and if we know that $f_2$ is invertible and has a continuous inverse, then $f_2\circ f_1$ is (dis)continuous in exactly the same places that $f_1$ is. |
H: proof that the series converges?
I just need to make sure that I do it correctly
Thanks in advance,
AI: You can use the fact that $\ln(x) < x$, which makes it easier:
$$ \int_{0}^{1}\frac{\ln(x)}{1+x^2}dx < \int_{0}^{1}\frac{x}{1+x^2}dx=\frac{1}{2}\ln(1+x^2)|_{x=0}^{1}=\dots. $$
Note: $ \frac{\ln(x)}{1+x^2} $ is Riemann integrable (in the extended, improper sense if you wish; as $x\to 0^+$, $\frac{\ln(x)}{1+x^2}\sim \ln(x)$), and by a basic property of the Riemann integral, for Riemann integrable functions $f$ and $g$,
$$ f \leq g \text{ on } [a,b] \implies \int_{a}^{b} f(x)dx \leq \int_{a}^{b} g(x)dx, $$
so the comparison above is justified.
Another approach: Note that the integrand has a singularity at $x=0$, so we have
$$ x\sim 0 \implies \frac{\ln(x)}{1+x^2} \sim \ln(x) $$
and using the fact that
$$ \int_{0}^{1}|\ln(x)|\,dx < \infty, $$
the original integral converges. |
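For completeness, this last fact can be checked by integrating by parts:
$$\int_{0}^{1}|\ln(x)|\,dx=\Big[x-x\ln(x)\Big]_{0}^{1}=1<\infty,$$
since $x\ln(x)\to 0$ as $x\to 0^+$.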
H: Domain, Co-Domain, and Linearity of Linear Systems homework check.
I am asked to find the domain, co-domain, and to determine whether or not the transformation is linear. I'm not sure if I am doing this properly, so I figured I would ask, as my textbook doesn't have solutions.
If someone could explain how to determine the linearity of the system I would appreciate it, right now I am just guessing.
$$w_1 = 3x_1 - 2x_2 +4x_3$$
$$w_2 = 5x_1 - 8x_2 + x_3$$
Domain = $R^3$
Co-Domain = $R^2$
Linearity: False?
$$w_1 = 2x_1x_2 - x_2$$
$$w_2 = x_1 + 3x_1x_2$$
$$w_3 = x_1 + x_2$$
Domain = $R^2$
Co-Domain = $R^3$
Linearity: False?
$$ w_1 = 5x_1 - x_2 + x_3 $$
$$ w_2 = -x_1 + x_2 + 7x_3 $$
$$ w_3 = 2x_1 - 4x_2 - x_3 $$
Domain: $R^3$
Co-Domain: $R^3$
Linearity: False?
$$w_1 = x_1^2-3x_2+x_3-2x_4$$
$$w_2 = 3x_1 - 4x_2 -x_3^2 + x_4 $$
Domain: $R^4$
Co-Domain: $R^2$
Linearity: False?
AI: For all of these, you have the codomain and domain correct, but some of them are in fact linear. Remember that a function $f: \Bbb{R}^n \rightarrow \Bbb{R}^m$ is defined to be linear if the following two conditions hold:
For all $u$, $v$ in $\Bbb{R}^n$, $f(u+v) = f(u) + f(v)$
For all $u$ in $\Bbb{R}^n$ and $a$ in $\Bbb{R}$, $f(a \cdot u) = af(u)$
This does hold for your first and third functions. For example, in the first function, letting $f(x_1,x_2,x_3) = (w_1,w_2)$, we have
$$f(x_1+y_1,x_2+y_2,x_3+y_3) = \\(3(x_1+y_1) - 2(x_2+y_2)+4(x_3+y_3),\; 5(x_1+y_1) - 8(x_2+y_2) + (x_3+y_3)) = \\(3x_1 - 2x_2 + 4x_3,\; 5x_1 - 8x_2 + x_3) + (3y_1 - 2y_2 + 4y_3,\; 5y_1 - 8y_2 + y_3) = \\f(x_1,x_2,x_3) + f(y_1,y_2,y_3),$$
and essentially the same argument gives the second property.
You should think about what, exactly, it is about the second and fourth equations that makes them nonlinear. |
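If it helps to double-check answers like this, the two defining conditions can also be tested numerically on random inputs; below is a minimal sketch (the function names `T1`, `T2` and the use of `numpy` are illustrative choices, not part of the textbook problem). The third and fourth systems can be checked the same way.

```python
# Minimal numerical sanity check (illustrative, not part of the textbook problem):
# sample random inputs and test additivity and homogeneity for the first two maps.
import numpy as np

def T1(x):  # first system: R^3 -> R^2
    x1, x2, x3 = x
    return np.array([3*x1 - 2*x2 + 4*x3, 5*x1 - 8*x2 + x3])

def T2(x):  # second system: R^2 -> R^3 (contains x1*x2 terms)
    x1, x2 = x
    return np.array([2*x1*x2 - x2, x1 + 3*x1*x2, x1 + x2])

def looks_linear(T, dim, trials=100):
    rng = np.random.default_rng(0)
    for _ in range(trials):
        u, v = rng.normal(size=dim), rng.normal(size=dim)
        a = rng.normal()
        if not (np.allclose(T(u + v), T(u) + T(v)) and np.allclose(T(a * u), a * T(u))):
            return False
    return True

print(looks_linear(T1, 3))  # expected: True  (linear)
print(looks_linear(T2, 2))  # expected: False (not linear)
```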
H: Relationship between $\int_a^b f(x)\,dx$ and $\int_a^bxf(x)\,dx$
I was working a problem, and came to a point where it would help greatly if there was a relationship between the following two expressions:
1) The numeric value of $\int_a^b f(x)\,dx$, and
2) The numeric value of $\int_a^bxf(x)\,dx$.
That is, if I know the numeric value for the first integral, but nothing about $f(x)$, is it possible to determine the numeric value for the second?
I've tried experimenting with integration by parts, but that always seems to need the indefinite integral of $F(x)$.
EDIT: In response to Calvin Lin's comment: I'm looking to compute the second integral, so equalities are the type of relationship that I'm looking for.
EDIT 2: In response to JoeHobbit's comment: The particular problem that I'm trying to solve is really this one here. However, this sprung off some other thoughts, not necessarily tied to specific problems.
AI: It's impossible to determine the numerical value of $\int_a^bxf(x)dx$ exactly from $\int_a^bf(x)dx$ (for example, $\int_0^1 xdx = \int_0^1(1-x)dx = \frac{1}{2}$, but $\int_0^1x\cdot xdx = \frac{1}{3} \ne \int_0^1x(1-x)dx = \frac{1}{6}$). However, you still know some things. As you mentioned, integration by parts tells you that $\int_a^bxf(x)dx = bF(b) - aF(a) - \int_a^bF(x)dx$, where $F(x)$ is an antiderivative of $f(x)$. If you could find bounds for the antiderivative, you could tell a bit about this integral. Another inequality was given in the comments, but you can't get an equality. |
H: How to prove that this sequence converges to zero?
Given a natural number $k$ and a real number $a$ such that $|a|<1$, define a sequence $(x_n)_{n\in\mathbb{N}}$ by
$$x_n = \frac{a^nn!}{k!(n-k)!}.$$
Show that $x_n\rightarrow 0$.
AI: Note the following:
$$
\lim_{n\to\infty}\left|\frac{x_{n+1}}{x_n}\right|
=\lim_{n\to\infty}\left|\frac{a^{n+1}(n+1)!}{k!(n+1-k)!} \cdot\frac{k!(n-k)!}{a^nn!}\right|\\
=\lim_{n\to\infty}\left|\frac{a(n+1)}{n+1-k}\right|\\
=|a|<1
$$
What does that tell you about the sequence? |
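For reference, the standard fact being used: if $\left|\frac{x_{n+1}}{x_n}\right|\to L<1$, pick $r$ with $L<r<1$; then there is an $N$ with $|x_{n+1}|\le r|x_n|$ for all $n\ge N$, hence $|x_n|\le r^{\,n-N}|x_N|\to 0$, and so $x_n\to 0$.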
H: If $\dim V=v$ and $\dim(\ker T)=n$, prove that $T$ has at most $v-n+1$ distinct eigenvalues
Let $T:V\to V$ be a linear operator. If $\dim V=v$ and $\dim(\ker T)=n$, prove that $T$ has at most $v-n+1$ distinct eigenvalues.
I have been working on this proof for a few days and I am not sure what direction to take with it. I feel like starting with the rank-nullity theorem is correct, and relating that to the sum of eigenspaces may be my next move, though I can't think of how to bring these two ideas together to create a fluid proof. Thank you for your help...
AI: I will assume that $V$ is a vector space over some field $k$ because you do not mention the ground field.
Suppose $T$ has $m$ distinct non-zero eigenvalues $\lambda_1,\ldots,\lambda_m$, and write $E_i$ for the eigenspace corresponding to $\lambda_i$; recall that eigenspaces for distinct eigenvalues, together with $\ker T$, form a direct sum inside $V$. Then
$$
v = \dim(V) \ge \dim(\ker T) + \dim E_1 + \dots + \dim E_m \ge n+m,
$$
because the direct sum of $\ker T$ and the eigenspaces is a subspace of $V$, and each eigenspace has dimension at least $1$. This means that $m \le v-n$. The distinct eigenvalues of $T$ are the $\lambda_i$ together with possibly $0$, so there are at most $m+1$ of them, and $m + 1\le v-n+1$.
Hope that helps, |
H: Telephone Number Checksum Problem
I am having difficulty solving this problem. Could someone please help me? Thanks
"The telephone numbers in town run from 00000 to 99999; a common error in dialling on a
standard keypad is to punch in a digit horizontally adjacent to the intended one. So on a
standard dialling keypad, 4 could erroneously be entered as 5 (but not as 1, 2, 7, or 8). No
other kinds of errors are made.
It has been decided that a sixth check digit $X$ will be added to each phone number $abcde$. There
are three different proposals for the choice of $X$:
Code 1: $a + b + c + d +e + X$ $\equiv 0\pmod{2}$
Code 2: $6a + 5b + 4c + 3d + 2e + X$ $\equiv 0\pmod{6}$
Code 3: $6a + 5b + 4c + 3d + 2e + X$ $\equiv 0\pmod{10}$
Out of the three codes given, choose one that can detect a horizontal error and one that cannot detect a horizontal error.
"
AI: You want to know if it can handle finding one horizontal error.
What does a horizontal error entail? Any given number $x$ can only be replaced by either $x-1$ or $x+1$; sometimes, only one or the other (for instance, 4 can only be replaced by 5, not by 3).
In Code 1, $X$ is just the "parity" of the numbers - all it tells you is whether their sum is even or odd. Since a shift in any one spot can only be a shift by 1, it will change the parity - so that a horizontal error will yield
$$
a+b+c+d+e+X\pm 1\equiv (a+b+c+d+e+X)+1\equiv1\pmod{2}.
$$
So, single horizontal shift errors are always detected.
In Code 2, the value of $a$ has no impact on the checksum - for any valid $a$, $6a\equiv0\mod 6$. So, for instance, if $a=4$, and you accidentally press $5$ instead, then
$$
6\cdot5+5b+4c+3d+2e+X\equiv 6\cdot4+5b+4c+3d+2e+X\equiv0\pmod{6},
$$
and the error is not detected. So, we cannot guarantee that code 2 detects a horizontal error.
In code 3, the situation is better: an error in $a$ will lead to the checksum being congruent to $6$ or $-6\equiv4\pmod{10}$; an error in $b$ will lead to the checksum being congruent to $5$; etc. It will always detect the error. Unfortunately, it still cannot tell us where the error occurs; a shift up in $a$ or a shift down in $c$ both make the checksum congruent to $6\pmod{10}$.
So, Codes 1 and 3 will always detect a horizontal error; code 2 will not. And none will detect the position of the error. |
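These conclusions can also be confirmed by brute force over all $10^5$ numbers; here is a minimal sketch, where the horizontal-adjacency table is an assumption based on the standard keypad layout described in the problem (digits $1$-$9$ in three rows, $0$ alone on the bottom row).

```python
# Brute-force check (illustrative sketch): which codes detect every single horizontal
# dialling error?  The adjacency table assumes the keypad rows 123 / 456 / 789 / 0.
from itertools import product

ADJ = {0: [], 1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4, 6],
       6: [5], 7: [8], 8: [7, 9], 9: [8]}

def check_digit(digits, weights, modulus):
    # X chosen so that the weighted sum plus X is 0 (mod modulus)
    return (-sum(w * d for w, d in zip(weights, digits))) % modulus

def detects_all_horizontal_errors(weights, modulus):
    for digits in product(range(10), repeat=5):
        x = check_digit(digits, weights, modulus)
        for pos, d in enumerate(digits):
            for wrong in ADJ[d]:
                bad = list(digits)
                bad[pos] = wrong
                # undetected error: the erroneous number still passes the check
                if (sum(w * e for w, e in zip(weights, bad)) + x) % modulus == 0:
                    return False
    return True

print(detects_all_horizontal_errors([1, 1, 1, 1, 1], 2))   # Code 1: True
print(detects_all_horizontal_errors([6, 5, 4, 3, 2], 6))   # Code 2: False
print(detects_all_horizontal_errors([6, 5, 4, 3, 2], 10))  # Code 3: True
```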
H: What test should I use to classify this series?
Classify
$$\sum_{n=1}^\infty \frac{ x^2+\cos^n x}{n^2}$$
as absolutely convergent, conditionally convergent or divergent.
I am not sure whether I should use integral test or comparison test.
Do you know what's the best test to classify this series?
Thanks in advance.
AI: Note:
$$\left | \sum_{n=1}^\infty \frac{ x^2+\cos^n x}{n^2}\right | \le \sum_{n=1}^\infty \frac{|x^2+\cos^n{x}|}{n^2} \le \sum_{n=1}^\infty \frac{ x^2+1}{n^2} = (x^2+1) \frac{\pi^2}{6}$$
Therefore the sum is absolutely convergent for any finite value of $x$ by the comparison test. Thanks to @N.S. and @vadim123 for correcting and clarifying the reasoning here. |
H: Equivalence classes - on $\mathbb{N}^2$
Let $R$ be the relation on $\mathbb{N}^2$ defined by
$(a,b)R(c,d)$ if $2a + 3b = 2c + 3d$
Write $4$ elements in the equivalence class of $(1,2)$
So I think I need to find all the pairs $(a,b)$ with $2a + 3b = 2(1) + 3(2) = 8$
But no other positive integers other than $(1,2)$ will satisfy that equation so obviously I'm doing something wrong.
Any suggestions?
AI: Your reasoning is spot on. The equivalence class consisting of all pairs $(a, b)$ that are equivalent to $(1, 2)$ under $R$, with the restriction that $a, b \in \mathbb Z^+$ consists only of $(1, 2)$ itself.
Otherwise, the set of pairs of integers (no restriction) which are in the equivalence class $\Big[(1, 2)\Big]$ is exactly $$\Big[( 1,2)\Big] = \{(a, b) \mid a = 1 + 3k \;\land \;b = 2-2 k,\;\; k \in \mathbb Z\} $$
As others have suggested, there must be some misprint involved here in the problem statement you've been given, or the instructor/author of the problem statement was having a very bad day and was careless in thinking the problem through. |
H: Mathematical symbol for "has"
Just out of curiosity, I was wondering if there was a symbol for "has" so intead of saying $x \in A$, we could say something like "$A$ has $x$", they both mean the same thing but I was just wondering if there was another way to say it.
Thanks!
AI: I believe I have seen $\ni$ used for this purpose. That is, $x\in A \iff A\ni x$ |
H: Standard Matrices for Linear Transformation
I'm not able to find an explanation of how to find the standard matrix for a linear transformation given by a system of equations. For example, if I have:
$$w_1=2x_1-3x_2+x_4$$
$$w_2=3x_1+5x_2-x_4$$
Would the standard matrix just be:
$$
\begin{bmatrix}
2 & -3 & 0 & 1 \\
3 & 5 & 0 & -1
\end{bmatrix}
$$
AI: To find the standard matrix of a linear transformation $T$ from $\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$, one needs to consider what $T$ does to the columns of the $n\times n$ identity matrix $I_n$, because any vector in $\mathbb{R}^n$ can be written as a linear combination of the columns of $I_n$.
So substitute the columns of $I_n$ for $(x_1, x_2, x_3, x_4)$; the resulting outputs are the columns of the standard matrix.
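For instance, feeding the standard basis vectors into the system above gives
$$T(1,0,0,0)=(2,3),\quad T(0,1,0,0)=(-3,5),\quad T(0,0,1,0)=(0,0),\quad T(0,0,0,1)=(1,-1),$$
and these are exactly the columns of the matrix proposed in the question.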
Please have a look here http://dip.sun.ac.za/~hanno/twb264/lesings/matrix_lin_trans.pdf for more information.
Your answer is correct. |
H: Compactness Proof
I have a doubt about this theorem:
A compact subset $M$ of a metric space is closed and bounded.
Proof by my lecture:
For every $x\in \bar{M}$ there is a sequence $(x_n)$ in $M$ such that $x_n\longrightarrow x \cdots$. My question is: why is this? I know that if $x$ is a boundary point then there exists $x_n\longrightarrow x$. But how can I prove it when $x$ is not a boundary point?
AI: There will be a sequence $( x_n )$ such that $x_n \to x$ simply because if $x$ is not a boundary point, then it is in $M$, and we have the trivial sequence $x_n = x$.
If $x \notin M$, then $x$ is a boundary point. Then we can select $x_n \ne x$ such that $d(x_n,x) < \frac{1}{n}$. That this sequence should converge to $x$ is a consequence of the definition of convergence and the Archimedean property of the reals. |
H: Rings of fractions and homomorphisms
Let the ring of fractions be denoted as $S^{-1}R$.
Proposition (from Robert Ash's textbook "Basic Abstract Algebra")
Define $f: R \rightarrow S^{-1}R$ by $f(a) = a/1$. Then f is a ring homomorphism. If S has no zero divisors then f is a monomorphism, and we say that R can be embedded in $S^{-1}R$. In particular:
(i) A commutative ring R can be embedded in its complete (or full) ring of fractions ($S^{-1}R$, where S consists of all non-divisors of zero in R).
(ii) An integral domain can be embedded in its quotient field.
Why does the proposition say "If S has no zero divisors..."? If S really had zero divisors, then this wouldn't even be properly defined, right? Because we would have a/0 for some a in R.
AI: I don't know how your book defines the ring of fractions, but the right (i.e. the most general as possible) way of doing things with rings of fractions is this :
If $M$ is an $R$-module and $1 \in D \subseteq R$ is a subset of $R$ which is multiplicatively closed, then we define $D^{-1} M$ as follows. Define the following equivalence relation over $D \times M$ : $(d_1, m_1) \sim (d_2, m_2)$ if there exists $d \in D$ such that $d(d_1 m_2 - d_2 m_1) = 0$. You can check that this defines an equivalence relation over $D \times M$ (the details are okay to work out, nothing hard), and in the case where $M = R$ and $R$ is an integral domain, you can get rid of the condition 'there exists a $d \in D$ such that' because it is not necessary.
Defining the addition as usual over $D^{-1} M$, this makes $D^{-1}M$ into an abelian group. In particular, since $R$ is an $R$-module over itself, we can define $D^{-1}R$. Considering the particular case of $D^{-1}R$ alone first, we can define multiplication as $\frac{r_1}{d_1} \frac{r_2}{d_2} = \frac{r_1 r_2}{d_1 d_2}$. This makes $D^{-1}R$ into a ring, so now we can say that the scalar multiplication $\frac{r}{d} \frac{m}{d'} = \frac{rm}{dd'}$ makes $D^{-1}M$ into a $D^{-1}R$-module.
(I used the letter $D$ because it stands for denominators. The letter $S$ is probably just used because of the alphabet...)
In this kind of generality, if you want to make $D^{-1}R$ into an integral domain, you need to make some assumptions on $D$. For instance,
Note that the set $D$ of all non-zerodivisors is a multiplicatively closed subset of $R$ which contains $1$. This means that if $\frac a1 = \frac b1$, there exists $d \in D$ such that $d(a-b) = 0$; since $d$ is not a zero-divisor, it follows that $a-b =0$ and $a=b$, hence the remark that $f$ is a monomorphism in this case. If $D$ contains a zero divisor, you can have some fraction $\frac r1 = \frac 01$ without having $r=0$, because this equation only means that there exists $d \in D$ such that $rd = 0$. The equivalence class of $\frac 01$ contains precisely those $r \in R$ that are zero divisors.
In an integral domain, there are no zero-divisors, so by the above remark the map $f$ is an embedding of $R$ into its quotient field $D^{-1}R$ (where $D$ is the set of non-zero elements, which is also the set of non-zero divisors in this case). Note that $D^{-1}R$ is a field because a fraction $\frac rd$ is never equivalent to $\frac 0{d'}$ for some $d'$, hence we can take its inverse to be $\frac dr$.
Hope that helps! Feel free to ask any questions about the details. |
H: Finding the smallest subset of a set of vectors which contains another vector in the span
Consider a set $S=\{ \underline{v_1},\dots , \underline{v_n} \} $ of vectors of dimension $d<n$. Suppose for some vector $\underline{b}$ that the solution space for the matrix equation
$\left[ \underline{v_1} \dots \underline{v_n} \right] \underline{x} = \underline{b} $
is nonempty. I wish to find the smallest subset $S'=\{\underline{v_1'}, \dots, \underline{v_m'} \} \subseteq S$ such that the solution space of the equation
$ \left[\underline{v_1'} \dots \underline{v_m'} \right] \underline{x'} = \underline{b}$
is still nonempty.
Clearly $S'$ should be a linearly independent set, but that does not narrow down the choices much. Is there a way to solve this cleverly without examining the span of each possible subset of $S$?
Thoughts on a solution:
One could compute the space of solutions of the first equation, and then attempt to construct one solution with as many $0$ entries as possible, but I am not sure how one would go about doing this either.
Ultimately, I hope to employ this to find interesting relations between elements of a vector space by decomposing some special element "efficiently" over a basis.
AI: What you're looking for here is a "sparse solution" to a system of equations.
In other words, we're looking for the solution to the problem
$$
\left[\begin{array}{cccc}
v_1 & v_2 & \cdots & v_n
\end{array} \right]\vec x= A \vec x=\vec b
$$
So that the column vector $\vec x$ has the lowest possible number of entries. Another way to put this is that we're trying to minimize $\|\vec x\|_0$ (the number of non-zero entries, also called the "support" or "zero-norm" of $\vec x$) under the constraint that $A \vec x=\vec b$. This is a difficult and practical problem, which is the subject of current research. For more information on all that, I'd recommend looking into "compressed sensing" and "sparse recovery".
To give you a little bit of a heads up: solving this efficiently requires using some property of $A$ to your advantage. One such family of matrices is the family of matrices $A$ that satisfy the "restricted isometry property". If $A$ has a sufficiently small "RIP constant" (lower than $\frac13$) then the problem can be solved by minimizing $\|\vec x\|_1$ under the same constraints, which is a problem that can be solved by linear programming. |
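For what it's worth, here is a minimal numerical sketch of that $\ell_1$ relaxation using `scipy`'s linear-programming routine; the matrix, the planted sparse vector, and the expectation that $\ell_1$ minimization recovers it are illustrative assumptions, not guarantees.

```python
# Illustrative sketch of the l1 relaxation: minimize ||x||_1 subject to A x = b,
# recast as a linear program by writing x = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1., 0., 2., -1.],
              [0., 1., 1.,  3.]])      # made-up example matrix
x_true = np.array([0., 0., 3., 0.])    # a planted sparse solution
b = A @ x_true

n = A.shape[1]
c = np.ones(2 * n)                     # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])              # A(u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x = res.x[:n] - res.x[n:]
print(np.round(x, 6))                  # should be close to x_true if l1 recovery succeeds
```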
H: Calculating the Roots of Sine
Aside from the obvious knowledge that the roots of $\sin x$ are all integer multiples of $\pi$, is there a formal, algebraic method to calculate the roots of trigonometric functions similar to the quadratic equation?
(e.g. roots of $\sin^2(ax) + \sin(bx) + c$ or some other non-trivial form)
AI: While trigonometric identities exist for multiples of an angle, perhaps the best-known being:
$$
\begin{align}
\sin 2x &= 2\sin x \cos x\\
\cos 2x &= \cos^2 x - \sin^2{x} =2\cos^2 x - 1 = 1 - 2 \sin^2x\\
\sin 3x &= 3\sin x - 4 \sin^3 x\\
\cos 3x &= 4\cos^3 x - 3\cos x
\end{align}
$$
there is no such general formula, and indeed no way to solve such equations, unless the trigonometric arguments are the same and therefore the expression can be factored. |
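For example, when the arguments agree (the case $a=b$ in the question), the substitution $u=\sin(ax)$ turns $\sin^2(ax)+\sin(ax)+c=0$ into the quadratic $u^2+u+c=0$, so $u=\frac{-1\pm\sqrt{1-4c}}{2}$, and then $ax=\arcsin(u)+2k\pi$ or $ax=\pi-\arcsin(u)+2k\pi$ for whichever of these values of $u$ lie in $[-1,1]$.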
H: Is $\sum_{n=3}^\infty\frac{1}{n\log n}$ absolutely convergent, conditionally convergent or divergent?
Classify
$$\sum_{n=3}^\infty \frac{1}{n\log(n)}$$
as absolutely convergent, conditionally convergent or divergent.
Is it,
$$\sum_{n=3}^\infty \frac{1}n$$ is a divergent $p$-series as $p=1$, and
$$\lim_{n\to\infty} \frac{1}{n\log(n)}\cdot n = \lim_{n\to\infty}\frac{1}{\log(n)} = 0$$ by the comparison test. And this converges to $0$.
So,
$$\sum_{n=3}^\infty \frac{1}{n\log (n)}$$
is conditionally convergent?
I'm not sure if I'm doing right or not. Could you guide me?
Thanks in advance! :)
AI: Note that if your given series is convergent then it's also absolutely convergent since $\frac{1}{n\log n}\geq 0\quad \forall n\geq 3$.
Now, since the sequence $(\frac{1}{n\log n})$ is decreasing to $0$, by the integral test your series has the same nature as the improper integral
$$\int_3^\infty\frac{dx}{x\log x}=\left[\log(\log(x))\right]_3^{\to\infty}=\infty$$
hence the series is divergent. |
H: How to prove that $Z[i]/7$ is a field?
I know that $\mathbb{Q}[i]$ is a field since we can find the inverse of each nonzero $a+bi$, namely $\frac{a}{a^2+b^2} - i\frac{b}{a^2+b^2}$. However, I am not sure how to do so for something like $\mathbb{Z}[i]/7$ since we can only have non-negative integers less than 7 as coefficients.
AI: Hint: the same method of rationalizing the denominator works over any field where $\,-1\,$ is not a square, since then $\,a^2+b^2\neq 0,\,$ for else $\,b = 0 \,\Rightarrow\,a= 0,\,$ contra $\,a+bi\neq 0,\,$ and $\, b\neq 0\,\Rightarrow\, (a/b)^2 = -1,\,$ contra hypothesis that $\, -1\,$ is not a square (as is true in $\,\Bbb Z/7).$
More generally one can rationalize denominators of algebraic numbers by taking norms (multiplying by conjugates), or by reading off the inverse from a polynomial having it as root. |
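As a concrete instance of the hint, take $1+2i$ in $\Bbb Z[i]/7$: here $a^2+b^2=1^2+2^2=5$ and $5^{-1}\equiv 3\pmod 7$, so $(1+2i)^{-1}=3(1-2i)=3-6i\equiv 3+i$; indeed $(1+2i)(3+i)=3+7i+2i^2=1+7i\equiv 1\pmod 7$.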
H: Totally ordered sets
Let T be a totally ordered set that is finite. Does it follow that minimum and maximum of T exist?
Since T is finite, I believe there exists a minimal element of T. From that it may be possible to show that the minimal element is the minimum, but I'm not quite sure whether that is the right approach.
AI: Claim: A totally ordered set with at least one minimal element has a minimum element.
Proof sketch: Let $b$ be the minimal element. If $b$ were not in fact the minimum, then by definition of minimum, "$b \le a$ for all $a$" would be false. Pick some $a$ for which $b \le a$ is not true, and derive a contradiction using totality. |
H: $m(\alpha E)=\alpha m(E)$ ? for every Lebesgue measurable set and $\alpha >0$
Let's consider a Lebesgue measurable set $ E \subset \mathbb R$. And let's consider a positive constant $\alpha>0$. I want to know if it's always true $m(\alpha E)=\alpha m(E)$. Clearly $m$ denote the Lebesgue measure.
At least in the case where $m(E)=0$ or $\infty$ it's true, and also in the case of $E=[a,b]$, but I can't prove it or disprove it in general. Please help me. If the result is true I would like to see a proof; if it's not, I would like to see a general family of sets that do have that property.
AI: Let $\{I_n\}$ be a covering sequence of open intervals for $E$ such that $m^*(E) \le \sum |I_n| \le m^*(E) + \epsilon$, where $m^*$ denotes outer measure.
Write each $I_n$ as $(a_n,b_n)$. Define $\alpha I_n := ( \alpha \cdot a_n , \alpha \cdot b_n )$. Clearly, $\{ \alpha I_n \}$ is a covering sequence of intervals for $\alpha E$, since $\alpha \cdot e \in \alpha I_n$ for some $I_n$.
Moreover:
$$m^*(\alpha \cdot E) \le \sum | \alpha I_n | = \alpha \sum |I_n| \le \alpha m^*(E) + \alpha \epsilon$$
$\epsilon$ is arbitrary so $m^*(\alpha \cdot E) \le \alpha m^*(E)$.
The reverse inequality follows by the fact that $\alpha^{-1} (\alpha E) = E$, so $m^*(E) \le \alpha^{-1} m^*(\alpha \cdot E )$ by the exact same proof as above. |
H: Calculate polyhedra vertices based on faces
I have some origami polyhedra which I know the type of faces it has and how they are connected (such as this torus) and I want to calculate the co-ordinates of the vertices to use as an input to script.
My question is how I should go about translating knowledge of the faces into the locations of the vertices. Are there geometric tricks I can apply, or are there any software libraries/tools I could use to help out here? To make things a bit more complicated, I don't think all the faces are regular polygons; all the edges should be the same length and all the angles should be the same, but being made of paper there is a little wiggle room.
AI: A standard tool to do this sort of thing is the polymake program, which google will find in seconds.
But from the description in your question (which is not very explicit) it seems that polymake would not work for you. For example, tori are not convex, and most things work on convex solids, and so on.
H: Finite implies Quasi-finite
Please help if you know a proof or a good reference for the following fact (exercise 3.5, Hartshorne's text).
Fact. A finite morphism of schemes $f: X \rightarrow Y$ is quasi-finite.
Here, the definition of quasi-finite is taken as $f^{-1}(q)$ is finite for all $q \in Y$.
AI: Hartshorne indicates at some point that the fibre corresponds to $X \times_Y \operatorname{Spec} k(q)$. The morphism from this to $\operatorname{Spec} k(q)$ is still finite, so you just need to know that if $k$ is a field and $A$ a finite-dimensional $k$-algebra then $\operatorname{Spec} A$ is finite.
You could quote some theorem about Artinian rings, but the geometric language helps us give a proof. Some steps: (i) the points of $\operatorname{Spec} A$ are closed (ii) the irreducible components are just points (iii) $A$ is Noetherian. |
H: Suppose G is a finite and simple group which acts transitively on S. Given that $k \equiv |S| > 1$, prove $|G|$ divides $ k!$
I think they key point here is to prove that $G$ must be isomorphic to a subgroup of $S_k$ then we would be done.
I am quite lost in trying to do so. Since $G$ acts transitively on $S$, can't we just say directly that $G$ induces some type of permutation, so it is isomorphic to a subgroup of $S_k$? I know something is wrong with this argument, but I can't quite see it.
AI: If we have an action of a group $G$ on a set $X$, then we can immediately construct a group homomorphism $f:G\to S(X)$, with $S(X)$ the group of permutations of $X$.
If $G$ is simple then $f$ must be injective: its kernel is a normal subgroup of $G$, and the kernel cannot be all of $G$ because the action is transitive on a set with more than one element, so by simplicity the kernel is trivial. Hence $f$ is in fact an isomorphism from $G$ to a subgroup of $S(X)$.
If, moreover, $X$ is finite of cardinal $n$, then $S(X)$ is also finite and of cardinal $n!$. Using Lagrange's theorem, we see that the order of $G$, which is equal to the order of the image of $f$, divides $n!$. |
H: A translation and a negation in $\mathbb{C}$ generate the infinite dihedral group.
I'd like to show that the linear functions
$$ \varphi(z) = z+b, \;\;\; 0\neq b\in \mathbb{C}$$
$$ \psi(z) = -z+c, \;\;\; c\in \mathbb{C}$$
generate, under composition, a group isomorphic to $Dih_\infty$, the infinite dihedral group.
Now $Dih_\infty$ may be presented as follows:
$$ \langle r,s \, | \, s^2=1, srs=r^{-1}\rangle.$$
These relations are satisfied by $\varphi$ and $\psi$, since $\psi^2 = 1$, ($1$ stands for the identity function) and $\psi\circ \varphi \circ \psi = \varphi^{-1}$. How do I show no further relations hold ?
An alternative proof is also welcome.
AI: An idea:
$$\tau(z):=\psi\circ\phi(z)=\psi(z+b)=-(z+b)+c=-z+(c-b)$$
$$\tau^2(z)=\tau(-z+(c-b))=-\left[-z+(c-b)\right]+(c-b)=z$$
And clearly, $\;\phi=\psi^{-1}\tau=\psi\tau\;$ , thus our group is
$$\langle\;\psi\,,\,\tau\;;\;\psi^2=\tau^2=1\;\rangle=C_2*C_2\cong Dih_\infty$$
Of course, in order to be completely formal (or perhaps "too" formal) you may want to show that no finite product of the form $\,a_1b_1a_2b_2\cdot\ldots\cdot\;$ , with $\,a_i,b_i\in\{\psi\,,\,\tau\}\,\,,\,\,a_i\neq b_i\;$ , can be the identity map (trivial element), but I think this is almost "trivial", since
$$\tau\circ\psi(z)=\tau(-z+c)=-(-z+c)+c-b=z-b\neq z\;,\;\;\text{since}\;\;b\neq 0$$
$$\text{and as already noted above,}\;\;\psi\circ\tau(z)=\psi(-z+c-b)=-(-z+c-b)+c=z+b\neq z$$ |
H: Optimal Solution Set To Linear Programs
I have the following assignment question, and I am not quite sure how to proceed.
Q: Consider the following LP (P): $\max\{{c^Tx:Ax=b, x \geq 0}\}$, where $A$ is an $m$ by $n$ matrix. Prove or disprove the following.
i) Let $D$ be any $m$ by $m$ matrix. Then the following LP has the same set of optimal solutions as (P): $\max\{{c^Tx:DAx=Db, x \geq 0}\}$.
ii) Let $D$ be a non-singular $m$ by $m$ matrix. Then the following LP has the same set of optimal solutions as (P): $\max\{{c^Tx:DAx=Db, x \geq 0}\}$.
iii) Let $y$ be any vector in $R^m$ and $D$ be any $m$ by $m$ matrix. Then the following LP has the same set of optimal solutions as (P): $\max\{{(c^T -y^TA)x + y^Tb:Ax = b, DAx=Db, x \geq 0}\}$.
Sol: So for part i), I have that the statement is false. It's quite easy to construct an LP with a unique optimal solution, and then picking D to be the zero matrix. Thus the resulting LP would be unbounded, as any $x$ would be feasible. So the optimal solution sets are clearly different.
For ii), I believe that the statement is true. I haven't found a way to prove it yet, but I believe that the reason it would be true is that the feasible solution space of $Ax = b, x \geq 0$ is the same as that of $DAx = Db, x \geq 0$, since $D$ is invertible. So any $x$ satisfying one set of constraints must also satisfy the other (which wasn't the case in part i). Also, since there is no change to the objective function, the optimal solution sets are identical.
For iii) I am stuck. Further, I don't really understand the point of introducing the vector $y$, as if $Ax = b$, then $y^TAx = y^Tb$, and so in the objective function $(c^T - y^TA)x + y^Tb$ would be the same as just $c^Tx$. Also, I don't really know where to go from here. I can't seem to construct a counter example, but can't seem to prove it either.
AI: You are correct for (i).
Let $\Omega_P = \{x | Ax = b, x \ge 0 \}$.
For (ii), let $\Omega_{ii} = \{x | DAx = Db, x \ge 0 \}$. Since $D$ is invertible, we have $Ax=b$ iff $Ax-b = 0$ iff $D(Ax-b) = 0$ iff $DAx = Db$. Hence $\Omega_P = \Omega_{ii}$ and so the problems are equivalent (same cost and same feasible set).
For (iii), let $\Omega_{iii} = \{x | Ax=b, DAx = Db, x \ge 0 \}$. It is clear that $\Omega_{iii} \subset \Omega_P$. If $x \in \Omega_P$, then $Ax=b$ and so $DAx=Db$, hence $x \in \Omega_{iii}$. Hence $\Omega_{iii} = \Omega_P$. If $x \in \Omega_P = \Omega_{iii}$, then $y^T(Ax-b) = 0$ for all $y$, and hence $c^Tx = c^T x-y^T(Ax-b) = (c^T-y^TA)x + y^Tb$. Hence the two problems are equivalent (the cost functions are the same on the same feasible set). |
H: Is $\omega_1 ^\omega$ countably compact?
Give $\omega_1$ the order topology, and then $\omega_1 ^\omega$ the product topology.
$\omega_1$ is countably compact, but what about this product?
I attempted to prove it in two different ways, but each time something goes wrong. How about for arbitrary powers of $\omega_1$?
AI: It’s not hard to prove that the space $\omega_1$ is sequentially compact. The product of countably many sequentially compact spaces is sequentially compact, so $\omega_1^\omega$ is sequentially compact. Finally, in first countable spaces sequential compactness is equivalent to countable compactness, and $\omega_1$ and $\omega_1^\omega$ are first countable, so $\omega_1^\omega$ is countably compact.
Added: As Henno Brandsma reminds me, $X^\kappa$ is countably compact for every cardinal $\kappa$ iff there is a free ultrafilter $\mathscr{U}$ on $\omega$ such that $X$ is $\mathscr{U}$-compact; the result can be found (as part of Theorem $4.11$) in Jerry E. Vaughan, ‘Countably Compact and Sequentially Compact Spaces’, Handbook of Set-Theoretic Topology, K. Kunen & J.E. Vaughan, eds., North-Holland, $1984$. ($X$ is $\mathscr{U}$-compact iff each sequence in $X$ has a $\mathscr{U}$-limit, where $x$ is the $\mathscr{U}$-limit of $\langle x_n:n\in\omega\rangle$ iff $\{n\in\omega:x_n\in V\}\in\mathscr{U}$ for each nbhd $V$ of $x$.) Theorem $4.9$ of the same survey says that a space is $\mathscr{U}$-compact for all free ultrafilters $\mathscr{U}$ on $\omega$ iff it is $\omega$-bounded, meaning that the range of each sequence in $X$ is contained in some compact set. The space $\omega_1$ is clearly $\omega$-bounded, so $\omega_1$ is $\mathscr{U}$-compact for all $\mathscr{U}\in\beta\omega\setminus\omega$, and therefore $\omega_1^\kappa$ is countably compact for all $\kappa$. |
H: Counter examples on Categories
I'm reading Categories for the Working Mathematician by Saunders Mac Lane. At the section 5 from chapter 1, for a fixed category, he claims that every arrow with right inverse, is epic (right cancellable). He claims also that the converse is true in the category of Sets, but fails in the category of Groups. I tried by myself to find a pair of groups and an arrow having these properties, but I cannot find them.
I also read that a given group, regarded as a category with one element, one arrow per element of the group, and the composition of arrows representing the group product, is not a concrete category. How can I prove that?
Thanks in advance. Every help would be very appreciated.
AI: If $f:Q_8 \to \{\pm 1\}$ is defined by $\{1,-1,i,-i\} \mapsto 1$ and $\{j,-j,k,-k\} \mapsto -1$ then $f$ is epic, but it has no right inverse, that is, there is no homomorphism $h: \{\pm 1 \} \to Q_8$ so that $f(h(-1))=-1$. This is simply because there are only two homomorphisms from $\{\pm 1\}$ to $Q_8$, $-1 \mapsto \pm 1$, but $f(\pm1)=1 \neq -1$.
An abelian example is $f:\mathbb{Z}/4\mathbb{Z} \to \mathbb{Z}/2\mathbb{Z}:x+4\mathbb{Z} \mapsto x+2\mathbb{Z}$. It is epic (in any concrete category containing it), but has no right inverse, since there are (at most) two homomorphisms $h:\mathbb{Z}/2\mathbb{Z} \to \mathbb{Z}/4\mathbb{Z}$, namely $h(n+2\mathbb{Z})=0+4\mathbb{Z}$ and $h(n+2\mathbb{Z}) = 2n + 4\mathbb{Z}$. However $f(h(1+2\mathbb{Z})) = 0 +2\mathbb{Z} \neq 1+2\mathbb{Z}$ in both cases, so $f$ has no right inverse.
Most algebraic categories are like this: not being a zero-divisor is different from being a unit. Not all epics split.
I believe a one object category in which all arrows are invertible (a “group”) is always a concrete category insofar as there is a faithful functor to the category of sets. My view of the object is as the set containing the group elements, and the arrows as either the left or right multiplication maps, which makes it a concrete category as well. |
H: Statement similar to Fermat's last theorm and Beal's conjecture: $A^x+A^y=A^z$
If $A^n+B^n=C^n$ is Fermat's last theorem and $A^x+B^y=C^z$ is Beal's conjecture , then what is $A^x+A^y=A^z$ ? Is there any conjecture like this? Just curious to know.
AI: Fermat's last theorem is not $A^n+B^n=C^n$, but rather
There are no positive integers $A,B,C,n$ with $n>2$ such that $A^n+B^n=C^n$.
Beal's conjecture is not $A^x+B^y=C^z$, but rather
If $A,B,C,x,y,z$ are positive integers and $x,y,z>2$ such that $A^x+B^y=C^z$, then $A,B,C$ have a common prime factor.
The treatment of integer solutions of $A^x+A^y=A^z$ with positive integers $A,x,y,z$ is trivial in comparison: If $A=1$, we get $1+1=1$, so no solution. If $A>1$, then clearly $z>y$ and $z>x$ and wlog. $y\ge x$, so that after dividing by $A^x$ we get $1+A^{y-x}=A^{z-x}$. The right hand side is a multiple of $A$ because $z>x$. Hence $A^{y-x}$ must not be a multiple of $A$, which implies $x=y$. But then we get $2=A^{z-x}$, i.e. $A=2$, $z=x+1(=y+1)$ as only solutions. |
H: $1^k+2^k+\cdots+(p-1)^k\equiv 0 \pmod{p}$
How can I show this equation
$$1^k+2^k+\cdots+(p-1)^k\equiv 0 \pmod{p},$$
where $k$ is integer that $p-1\nmid k$ and $p$ is odd prime.
AI: Another way of presenting this is to choose $a$ such that $a^k\not\equiv 1\pmod p$ and
note that $0,a,2a,\ldots,(p-1)a$ form a complete residue system modulo $p$. Multiplying the sum $S=1^k+2^k+\cdots+(p-1)^k$ by $a^k$ we arrive at
$S\equiv a^kS\pmod p$, and thus $S\equiv 0 \pmod p.$
P.S. Such an $a$ exists for any prime $p$ with $p-1\nmid k$: one may take a primitive root mod $p$. |
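A small sanity check with $p=5$ and $k=2$ (so that $p-1=4\nmid 2$): $1^2+2^2+3^2+4^2=30\equiv 0\pmod 5$.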
H: Find $c$ which makes $cA$ an orthogonal projection on $A$
$A=\begin{pmatrix} 2&-1&-1\\-1&2&-1\\-1&-1&2\end{pmatrix}$
$c>0$ and $B=cA$. Find $c$ which makes $B$ an orthogonal projection on $A$.
Hmmm.....I first find the orthogonal eigenvectors of $A$...
Am I going the right way?
AI: Hint: If $B$ is a projection (orthogonal or not), we must have $B^2=B$. And if $B$ is a real symmetric projection matrix, it has to be a orthogonal projection (why?). |
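Spelling that out for this particular $A$: a direct computation gives $A^2=3A$, so $B^2=c^2A^2=3c^2A$, and $B^2=B$ forces $3c^2=c$, i.e. $c=\frac13$ (using $c>0$). Since $B=\frac13A$ is also real symmetric, it is then an orthogonal projection.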
H: Mathematical Induction with Inequalities
$P(n)\colon n < 3^n - 4$ for all $ n \ge 2$
Base case: $2 < 3^2 - 4$
$2 < 5$
Inductive step: Assume true for $n = k$, show true for $n = k + 1$
That is, assume $k < 3^{k} - 4$, and show $k + 1 < 3^{k + 1} - 4$
So,
(This is where I might be wrong)
$k + 1 < 3^k + 1 - 4$ (by IH) $\le 3^k + 3^k - 4 = 3^{k + 1} - 4$
Is this a valid proof? I guess I don't understand induction with inequalities very well.
AI: You're very close! Your last equality was incorrect, though. Instead, $$3^k+1-4\le3^k+3^k+3^k-4=3^k\cdot 3-4=3^{k+1}-4.$$ |
H: $\lim_{n\rightarrow\infty}n(\ln n)a_n=0$ implies $\sum a_n$ converges?
Is it true that
If $$\lim_{n\rightarrow\infty}n(\ln n)a_n=0,$$
then the series
$$\sum_{n=1}^\infty a_n$$
converges?
If so, I want to know the proof.
If not, I want to know the counter example.
AI: The series $$\sum_{n=4}^\infty \frac{1}{n\log n \log\log n}$$ does not converge. Use the integral test. |
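Indeed, for this series $a_n=\frac{1}{n\log n\log\log n}$ satisfies $n(\ln n)a_n=\frac{1}{\log\log n}\to 0$, while
$$\int_4^\infty\frac{dx}{x\log x\log\log x}=\Big[\log\log\log x\Big]_4^{\to\infty}=\infty,$$
so the hypothesis holds and the series nevertheless diverges.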
H: Order preserving maps
Suppose $f:X \to Y$ is order preserving. Let $A$ be a subset of $X$.
Does is follow that if $A$ is well ordered then $f(A)$ is well ordered?
AI: Yes. If $S$ is a nonempty subset of $f[A]$, let $T=\{x\in A:f(x)\in S\}$. Then $T$ has a least element $t$, and $f(t)$ is clearly minimal in $S$. |
H: difference between 2 prime numbers
We have to prove that if the difference between two prime numbers greater than two is another prime, the prime is $2$.
It can be proved in the following way.
1) Odd $-$ odd $=$ even.
Therefore the difference will always be even.
2) The only even prime number is $2$. Therefore the difference will be $2$ if the difference between the primes is another prime.
I am looking for more proofs of this theorem. Any help will be appreciated.
AI: The proof you provided is fine way of proving the proposition. Here is an alternate proof, set up as a contradiction,
Suppose that the difference between two odd primes $a=2n+1$ and $b=2m+1$ is an odd prime, $c$.
$$a-b=c \implies (2n+1)-(2m+1)=c \implies 2(n-m)=c$$
Therefore, $2\mid c$, contradicting the assumption that $c$ is an odd prime.
It is therefore impossible for the difference of two odd primes to be an odd prime. This means that the difference of two odd primes must be even. The only even prime is 2.
So, if the difference of two odd primes is a prime, then it must be two. |
H: $|s_{2k}-s_k|<\epsilon$ implies $\{s_k\}$ converges?
$\{s_k\}$ is a sequence in $\mathbb{R}$.
$\forall\epsilon>0$, $\exists N\in\mathbb{N}$ s.t. $k\geq N$ implies $|s_{2k}-s_k|<\epsilon$. Then, does $\{s_k\}$ converge or not?
I need a proof. Thanks.
AI: It may not converge. For example, let $s_k=1$ when $k=2^n$ for some $n\ge 0$ and let $s_k=0$ otherwise. Then $s_{2k}-s_k=0$ for every $k\ge 1$, and clearly $s_k$ does not converge.
Edit: Even if $s_k$ is nonnegative and increasing, it could also diverge to $+\infty$. For example, let $s_1=0$ and $s_k=\log(\log (k))$ for every $k\ge 2$. Then when $k\ge 2$,
$$s_{2k}-s_k=\int_{\log k}^{\log (2k)}\frac{dx}{x}<\frac{\log 2}{\log k}\to 0, $$
as $k\to\infty$. However, $\lim_{k\to\infty}s_k=+\infty$. |
H: Factor $x^5-1$ into irreducibles in $\mathbb{F}_p[x]$
I have to factor the polynomial $f(x)=x^5-1$ in $\mathbb{F}_p[x]$, where $p \neq 5$ is a generic prime number.
I showed that, if $5 \mid p-1$, then $f(x)$ splits into linear irreducible factors.
Now I believe that, if $5 \nmid p-1$ but $5 \mid p+1$, then $f(x)$ splits into three irreducible polynomial, one of degree $1$ and two of degree $2$.
Otherwise, if $5 \nmid p-1$ and $5 \nmid p+1$ (so $5 \mid p^2+1$), then $f(x)$ splits into two irreducible factors, one of degree $1$ and one of degree $4$.
How can I prove this statements (if I'm right, well...)? Thank you!
AI: For $p\neq 5$, it is always true that
$$p^4\equiv 1 \textrm{ mod }5$$
On the other hand,
$$p^4-1=(p-1)(p+1)(p^2+1)$$
If $5|p-1$, then roots of $x^5-1=0$ are included in $\mathbb{F}_p^{*}\simeq\mathbb{Z}/(p-1)\mathbb{Z}$. So, you don't need an extension field to include roots. Thus, $x^5-1$ factors into linear polynomials.
If $5\nmid p-1$, but $5|p+1$, then $5|p^2-1$, and all roots of $x^5-1=0$ are in $\mathbb{F}_{p^2}^{*}\simeq\mathbb{Z}/(p^2-1)\mathbb{Z}$, but not in $\mathbb{F}_p^{*}$. So, you need degree 2 extension of $\mathbb{F}_p$ to include roots. Thus, $x^5-1$ factors into $(x-1)$ and two quadratic irreducible polynomials.
Finally, if $5\nmid p^2-1$, and $5\mid p^2+1$, then all roots of $x^5-1=0$ are in $\mathbb{F}_{p^4}^{*}\simeq\mathbb{Z}/(p^4-1)\mathbb{Z}$, but not in $\mathbb{F}_{p^2}^{*}$. So, you need degree 4 extension. Thus, $x^5-1$ factors into $(x-1)$ and degree 4 irreducible polynomial. |
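If you want to see the three cases concretely, a quick check with `sympy` (assuming it is available) factors $x^5-1$ for one prime of each type; it should print five linear factors for $p=11$, $(x-1)$ times two quadratics for $p=19$, and $(x-1)$ times a quartic for $p=7$.

```python
# Quick empirical check of the three cases (assumes sympy is available).
from sympy import symbols, factor

x = symbols('x')
for p in (11, 19, 7):   # 5 | 11-1,   5 | 19+1,   5 | 7**2+1
    print(p, factor(x**5 - 1, modulus=p))
```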
H: Solving a set of two polynomial equations
For a given $a,b,c,d$ in $\mathbb{R}$, I want to prove that if
$$ac-bd=0 \quad \text{and} \quad ad+bc=0$$
then $a=b=0$ or $c=d=0$.
I am able to prove this in a long and cumbersome way, and I'm sure that there is a better way to prove that.
Many thanks.
Gil.
AI: Multiply $ac=bd$ by $d$ to get $acd=bd^2$, and multiply $ad+bc=0$ by $c$ to get $acd+bc^2=0$; substituting the first into the second gives
$$bd^2+bc^2=b(c^2+d^2)=0$$
So, either $c=d=0$ or $b=0$. Take the second case then we get $ac=0$ and $ad=0$. So either $a=0$ or $c=d=0$. The second option brings us back to our first case. |
H: Proof that the Dirichlet function is discontinuous
I think I don't understand how it works.. I found some proofs.. okay, let's see:
Well I'd like to show that the function,
$$f(x) = \begin{cases} 0 & x \not\in \mathbb{Q}\\ 1 & x \in \mathbb{Q} \end{cases}$$
is discontinuous.
Now with epsilon-delta-definition:
Let's choose an $\varepsilon < 1$, for example $\varepsilon := 1/2$, and let $\delta > 0$ be arbitrary.
I have to show that there is an $x$ with $|x-x_0|<\delta$ such that $|f(x) - f(x_0)| > \varepsilon$.
So if $x_0 \not\in \mathbb{Q}$ let $x$ be rational in $(x_0- \delta, x_0+ \delta)$
if $x_0 \in \mathbb{Q}$ let $x$ be irrational in $(x_0- \delta, x_0 + \delta)$
In the end it is $|f(x) - f(x_0)| = 1 > 1/2$.
Why is $|f(x) - f(x_0)| = 1 $ ?
AI: There always exist both rational and irrational points in the interval $(x_0-\delta, x_0+\delta)$, so an $x$ of the opposite "type" to $x_0$ can always be chosen. Then, by the definition of the Dirichlet function, one of $f(x), f(x_0)$ equals $1$ and the other equals $0$, so $|f(x)-f(x_0)|=1$, and we get the conclusion. |
H: Can a topological space satisfying the first axiom of countability, which is not Hausdorff but every sequence has a unique limit, exist?
My textbook (Principles of General Topology, by Pervin) says "A topological space $X$ satisfying the first axiom of countability is a Hausdorff space iff every convergent sequence has a unique limit."
Let $<x_{n}>$ be a sequence converging to $x\in X$. Let there be another point $y\in X$. By the first axiom of countability, for every open set $G$ containing $x$, there is a $B_{n}(x)\subseteq G$, and for every open set $H$ containing $y$, there is a $B_{n}(y)\subseteq H$. Also, for every open set containing $x$, for a certain N, all $x_{j}$, $j\geq N$ lie in that open set.
Let every open set containing $y$ intersect every open set containing $x$, and let the intersection consist of points other than points of the sequence $<x_{n}>$. Then, this is not a Hausdorff space, but the sequence $<x_{n}>$ still tends to the unique limit $x$ (provided, for every other point in $X$ apart from $y$, we assume there is at least one open set containing that point which does not intersect with an open set containing $x$. Hence the sequence can't have any of those points as its limit)!
Is this not a contradiction then? We have created a space satisfying the first axiom of countability, in which every sequence tends to a unique limit, but the space is not Hausdorff?
One possible contradiction I see is if we can define a sequence in the intersections between the open sets containing $x$ and $y$. All these intersections will intersect amongst themselves if ANY two open sets containing $x$ and $y$ intersect, and hence such a sequence may be possible. It would be great if someone could tell me whether this is the reason why the theorem stands true.
Thanks a lot!
AI: It’s not true that every sequence in $X$ tends to a unique limit. Suppose that every open nbhd of $x$ intersects every open nbhd of $y$, let $\{B_n(x):n\in\Bbb N\}$ be a countable local base at $x$, and let $\{B_n(y):n\in\Bbb N\}$ be a countable local base at $y$. We may assume that these bases are nested, i.e., that $B_{n+1}(x)\subseteq B_n(x)$ and $B_{n+1}(y)\subseteq B_n(y)$ for each $n\in\Bbb N$.
For each $n\in\Bbb N$ we know that $B_n(x)\cap B_n(y)\ne\varnothing$, so there is a point $z_n\in B_n(x)\cap B_n(y)$; I claim that the sequence $\langle z_n:n\in\Bbb N\rangle$ converges to both $x$ and $y$. Let $U$ be any open nbhd of $x$; then there is an $m\in\Bbb N$ such that $B_m(x)\subseteq U$, and for each $n\ge m$ we then have
$$z_n\in B_n(x)\subseteq B_m(x)\subseteq U\;.$$
This shows that $\langle z_n:n\in\Bbb N\rangle$ converges to $x$. An exactly similar argument shows that it also converges to $y$. Thus, there is at least this one sequence in $X$ that converges to two different limits, and that’s all that we had to show. |
H: Existence of limit of a function of a complex variable.
Say that I have a function $f(z)=f(x+iy)=f(x,y)$ and I want to investigate whether the limit at a particular point, say $z=0$ exists.
I recall that within the domain of real numbers, I checked whether the "right" and "left" limits were the same.
Now is it possible to do this in a similar way when $z$ is complex?
e.g. Is it enough to let $z = re^{i \phi}$ and check whether the limit when $r$ goes to $0$ is independent of $\phi$?
Many thanks.
AI: Unfortunately, in $\mathbb{C}$ (or really $\mathbb{R}^2$ here) it won't be enough to check any particular "direction". You have to handle a general sequence.
For a counterexample, consider the (very discontinuous) function $f$ such that $f(1/n, 1/n^2) = 1$ but for all other arguments $f(x,y) = 0$. Then clearly the limit of $f$ does not exist at $0$, but for any fixed $\phi$ you have $f(r e^{i\phi}) = 0$, except for at most one value of $r$.
That being said, by passing to subsequences, you can generally assume that $x_n \to 0$ either from above or from below, and likewise for $y$. More generally, if you partition $\mathbb{C}$ into finitely many pieces, you can work with sequences whose all elements fall into one piece. |
H: Proof about Holomorphic functions in the unit disc
We want to prove the following:
If $f$ is a holomorphic function on the unit disc $\mathbb{D}$ s.t. $f(z) \neq 0$ for $z \in \mathbb{D}$, then there is a holomorphic function $g$ on $\mathbb{D}$ such that $f(z)=e^{g(z)}$ for all $z \in \mathbb{D}$.
For such a $g$, we have that $\frac{f'(z)}{f(z)}=g'(z)$ on $\mathbb{D}$ since $f$ has non-zero derivative.
Perhaps then we define $g(z)=\int_\gamma \frac{f'(w)}{f(w)}\,dw$ where $\gamma$ is a path in $\mathbb{D}$ from $0$ to $z$.
Does this work?
AI: As far as I am able to tell, your idea is correct. You can define $g$ by:
$$ g(z) = \int_\gamma \frac{f'(w)}{f(w)}\, dw$$
because $\mathbb{D}$ is simply connected and the integral does not depend on the path. Or even more simply, just expand $\frac{f'(z)}{f(z)} = \sum_{n} a_nz^n$ and put $g(z) = \sum_{n} \frac{a_n}{n+1} z^{n+1}$. In any case, you find a holomorphic $g$ such that $g'(z) = \frac{f'(z)}{f(z)}$.
Put $h(z) = e^{g(z)}$; this is again holomorphic. Then $\frac{h'(z)}{h(z)} = \frac{f'(z)}{f(z)}$ by a simple computation. Let $r(z) := \frac{f(z)}{h(z)}$; one can compute that:
$$ r'(z) = \frac{f'(z)h(z) - h'(z)f(z)}{h(z)^2} = 0 $$
Hence, $r(z)$ is a constant function. As a consequence, $f(z) = c h(z)$ for some constant $c$. Writing $c = e^{\alpha}$ (which is always possible) you get:
$$ f(z) = e^{\alpha + g(z)} $$
so $\tilde{g}(z) := \alpha + g(z)$ is the sought function. |
H: How can I solve this differential equation?
Consider the differential equation
$x^2y'' + a\,x\,y' + b\,y = 0 \text{ where } y = y(x) \text{ and } a,b \in R$
Using the change of variable $u = \ln(x)$, how can I transform the differential equation in the form of?
$Z'' + \alpha Z'+ \beta Z = 0 \text{ where } Z = Z(u)$
And what are the values of $\alpha \text{ and }\beta$ as a function of a and b?
Thanks in advance
AI: For $u=\ln{x}$ we get:
$$y'_x=y'_uu'_x=y'_u\frac{1}{x},$$
$$y''_{xx}=(y'_x)'_x=y'_u(-\frac{1}{x^2})+y''_{uu}\frac{1}{x^2}.$$
The differential equation becomes
$y''-y'+ay'+by=0$, where $y=y(u)$. Values $\alpha$, $\beta$ are
$$\alpha=a-1$$ and
$$\beta=b.$$
Finally, the roots of characteristic equation $\lambda^2+(a-1)\lambda+b=0$ are
$$\lambda_1=\frac{1-a+\sqrt{(a-1)^2-4b}}{2},$$
$$\lambda_2=\frac{1-a-\sqrt{(a-1)^2-4b}}{2},$$
so solution to the differential equation is
$$y(u)=C_1 e^{\lambda_1 u}+C_2 e^{\lambda_2 u},$$
or
$$y(x)=C_1 x^{\frac{1-a+\sqrt{(a-1)^2-4b}}{2}}+C_2 x^{\frac{1-a-\sqrt{(a-1)^2-4b}}{2}}.$$ |
H: Vector analysis: $(\vec v \cdot \vec \nabla) \vec v=(\vec \nabla \cdot \vec v) \vec v$?
If I know that $\vec \nabla \cdot \vec v=0$, can I say that:
$$( \vec v \cdot \vec \nabla )\vec v=\underbrace{(\vec \nabla \cdot \vec v)}_{=0} \vec v=0 $$ ?
Note: this is a question I asked in Physics StackExchange but as it is mainly mathematical I thought it could be relevant to post it here too.
AI: No. Try $v := (x,-y)$. Then $\nabla \cdot v = 0$ but $(v\cdot \nabla) v = (x,y)$. Problem is $(v \cdot \nabla) v \neq (\nabla \cdot v) v$ in general.
Edit: There seems to be some confusion as to what $v \cdot \nabla$ means, especially from the comments of the OP who seem to suggest that he/she treats $v$ and $\nabla$ as vectors which could be freely multiplied together in any order.
In the above example, $v\cdot\nabla$ is the operator $x \frac{\partial}{\partial x} - y \frac{\partial}{\partial y}$ which, being an operator, does not make sense unless you apply it to something. On the other hand, $\nabla \cdot v$ means apply the operator "$\nabla \cdot$" to $v$, resulting in the scalar function $\frac{\partial x}{\partial x} + \frac{\partial (-y)}{\partial y} = 0$. |
H: Can this logic about locating a point uniquely without using "-" be challenged?
I was just exploring the possibility of locating a point without using the "-" (negative) sign.
( Actually negative sign confuses me a lot when understanding the basics of coordinate geometry )
So, here is the line where actually
c1 = -1
c0 = 0
c2 = +1
-------------c1------------c0-------------c2--------------
To tell any person standing on "c0" to go to c1, I can tell them to go to the [-1,0] vector, or 1 unit left. Similarly, to go to c2, I can say go 1 unit right, or simply the [1,0] vector.
Instead of using negative sign, can the following information help a person standing on "c0", to identify c1 and c2 without confusion :
That is,
1) c0 is 1 unit away from c1
2) c0 is 1 unit away from c2
3) c1 is 2 units away from c2
The good thing is, this does not involve any negative sign. Can this be challenged for ambiguity by the standing person ?
AI: I think I understand what you're saying - you want to describe the location of points without giving explicit coordinates.
Note that your description (like any description involving only distances) cannot fix which way round all the points are. For example, switching $c_1$ with $c_2$ leaves all the distances unchanged, as does spinning everything around in the plane.
As for minus signs: this is exactly the same as a number line - are you comfortable with the idea of negative numbers stretching off to the left while positive ones go right? |
H: Proving a limit exists for the next multi-variable function:$f(x,y)=\frac{x^3+y^3}{x^2+y^2}$.
Proving a limit exists for the next multi-variable function: $f(x,y)=\frac{x^3+y^3}{x^2+y^2}$.
I know it's pretty much the basics, but I do not understand how to prove whether a limit exists.
What I did so far was:
Let $x=r\cos\alpha$ and $y=r\sin\alpha$.
I'll show that $\lim_{r\to0} f(r\cos\alpha, r\sin\alpha)=0$:
So i'll get $\lim_{r\to0}r\frac {\cos^3\alpha+\sin^3\alpha}{\cos^2\alpha+\sin^2\alpha}=\lim_{r\to0}r\frac {\cos^3\alpha+\sin^3\alpha}{1}=0 $
Is the above enough for proving that the limit exists? Overall, what are the different ways of proving that a limit of a two-variable function exists?
AI: Yes, that's enough. Converting to polar coordinates is usually the easiest or most efficient way to prove existence of a limit. Other methods including using the formal $\epsilon$-$\delta$ definition or the Squeeze Theorem.
To prove that a limit does NOT exist, one easy method is to pick two different paths that approach the given point that result in different limits. |
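One caveat about the polar method is worth keeping in mind: the limit in $r$ has to be uniform in $\alpha$. Here that is clear, since $\left|r(\cos^3\alpha+\sin^3\alpha)\right|\le 2r\to 0$ independently of $\alpha$.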
H: A Catalan-like counting of walks of length $n$ on $\mathbb{Z}$
I would like to count the number of walks of length $n$ on $\mathbb{Z}$ starting at $0$, where in each step you move either one left or one right, such that you never land on a negative integer (i.e. you can't go left more times than you go right at on any given step). Notice that I don't require the walk to terminate at $0$, just at a non-negative integer. Clearly, if we denote $$C_{s,t}=\frac{s-t+1}{s+1}\binom{s+t}{t}$$ then $C_{s,t}$ is the number of such walks with $n=s+t$ in which there are exactly $s$ steps right (and hence $t$ steps left). The the number I look for will be
$$\sum_{t=0}^{\left\lfloor\frac n2\right\rfloor}C_{n-t,t}$$
I found in this forum that this number is also equal to $$\binom{n}{\left\lfloor\frac n2\right\rfloor}$$
My question is: why is it true - it is mentioned there that there exists a bijection between my desired walks and walks of length $n$ with exactly $\left\lfloor\frac n2\right\rfloor$ steps left, but no restriction on staying non-negative. Can anyone show me an explicit bijection between the two or an another way to see this equality?
AI: There is a simple bijection that maps such walks to unconstrained walks that end at $0$ or $1$ (depending on the parity of $n$). First mark in the original (constrained) walk the positive (rightward) steps that are the last ones to leave from a particular element of $\Bbb Z$; if the path ends in $k$ then there are precisely $k$ such steps. Now reverse the direction of the first $\lfloor\frac k2\rfloor$ marked steps, which will move the end point back $2\lfloor\frac k2\rfloor$ units so that it becomes $k\bmod 2=n\bmod 2\in\{0,1\}$.
The reason this is a reversible operation is that after the modification, the steps that have been reversed can be recognised as the first steps that reach the points $-i$, for $i=1,2,\ldots,\lfloor\frac k2\rfloor$, and $-\lfloor\frac k2\rfloor$ is the most negative value reached by the modified walk. |
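For small $n$ the identity itself is easy to confirm by brute force; a minimal sketch:

```python
# Brute-force confirmation of the identity for small n: count walks of length n on Z,
# starting at 0, that never go below 0, and compare with C(n, floor(n/2)).
from itertools import product
from math import comb

def count_nonnegative_walks(n):
    count = 0
    for steps in product((+1, -1), repeat=n):
        pos = 0
        ok = True
        for s in steps:
            pos += s
            if pos < 0:
                ok = False
                break
        count += ok
    return count

for n in range(1, 13):
    assert count_nonnegative_walks(n) == comb(n, n // 2)
print("identity checked for n = 1, ..., 12")
```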
H: How to prove that $\frac{1}{n+1} = {\sqrt{n\over n+1}}\implies n = \Phi$?
Consider:
$$\frac{1}{n+1} = {\sqrt{n\over n+1}}$$
How could one prove that $n$ must be of the following form:
$$\frac{1}{n+1} = {\sqrt{n\over n+1}}\implies n = {\sqrt{5\,}-1 \over 2} \implies n = \Phi$$
where $\Phi$ denotes the golden ratio?
Edit:
I noticed that:
$$\frac{1}{n+1} = {\sqrt{n\over n+1}} \implies \left ( \frac{1}{n+1}\right ) ^2 = \left ({\sqrt{n\over n+1}}\right)^2\implies$$
$${1\over(n+1)^2} = {n\over(n+1)}\implies n = {1\over n+1} \implies$$
$${1\over{1+{1\over1+n}}} \approx \Phi$$
as $$\Phi = {1\over{1+{1\over{1+{1\over\dots}}}}}$$
which should be proof enough.
AI: HINT:
Just square both sides to get $$\frac1{(n+1)^2}=\frac n{n+1}\implies n^2+n-1=0$$ assuming $n+1\ne0$
Alternatively, $\sqrt{n+1}=(n+1)\sqrt n\implies 1=\sqrt{n(n+1)}$ assuming $n+1\ne0$
Now, square both sides to get $n^2+n=1$
So, $n=\frac{-1\pm\sqrt5}2$
As $\sqrt{\frac n{n+1}}$ is considered to be $\ge 0,\frac1{n+1}\ge 0\implies n\ge -1$
Now, as $-\frac{1+\sqrt5}2<-1,$ it is an extraneous root |
H: Derivative of $\frac{\cos(y)^4}{x^4}$ using the quotient rule
I have to find the derivative of $\frac{\cos(y)^4}{x^4}$ with respect to $x$. Using the quotient rule I'm making $$\frac{(\cos(y)^4)'x^4-(x^4)'\cos(y)^4}{x^{4^2}}$$
This gives me $\frac{0-4x^3\cos(y)^4}{x^{16}}\Rightarrow -\frac{4\cos(y)^4}{x^{13}}$ but Wolfram is giving $-\frac{4\cos(y)^4}{x^5}$.
Wolfram is using the power rule whereas I'm using the quotient rule, but nevertheless I believe that the result should be the same, so what am I doing wrong?
AI: Since the numerator is independent of $x$, you can use the elementary power rule and obtain
$$\left(\frac{\cos^4{y}}{x^4}\right)'=-4\cdot\frac{\cos^4{y}}{x^5}.$$
However, the quotient rule is still valid, but note that $(x^4)^2=x^8$. |
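Redoing the quotient-rule computation with the corrected denominator:
$$\frac{0\cdot x^4-4x^3\cos^4(y)}{x^8}=-\frac{4\cos^4(y)}{x^5},$$
which matches Wolfram's answer.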
H: understanding probability distribution notation
Assume that $\mu$ is a probability distribution on $[n]$, let $A\subseteq[n] $ be a probability event. What does it mean : $Pr_\mu[A]$ ?
AI: This is $\mu(A)=\sum\limits_{k\in A}\mu(\{k\})$. |
H: Expected number of coin tosses before $k$ heads
If the probability of getting a head is $p$, how do you compute the expected number of coin tosses to get $k$ heads?
I thought this might be the mean of the negative binomial distribution but this gives me $pk/(1-p)$ which is $k$ for $p=1/2$ which can't be right.
AI: Let $X_1$ be the number of coin tosses until the first head, $X_2$ be the number of coin tosses after that until the second head, and so on. We want $E[X]$ where $X = X_1 + X_2 + \dots + X_k$. By linearity of expectation, we have $E[X] = E[X_1] + E[X_2] + \dots + E[X_k]$.
For any $i$, the number $E[X_i]$, the expected number of coin tosses until a head appears, is $\dfrac{1}{p}$. You can see this by calculating the mean of the geometric distribution, or by noticing that $E[X_i] = 1 + (1-p)E[X_i]$, etc.
All this gives $E[X] = \dfrac1p + \dfrac1p + \dots + \dfrac1p = \dfrac{k}p$. |
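If it helps, the $\dfrac{k}{p}$ formula is easy to sanity-check with a tiny Monte Carlo simulation (my own sketch; the values of $k$ and $p$ below are just illustrative):

    import random

    def tosses_until_k_heads(k, p):
        # Toss a p-biased coin until k heads have appeared; return the number of tosses.
        tosses, heads = 0, 0
        while heads < k:
            tosses += 1
            if random.random() < p:
                heads += 1
        return tosses

    k, p, trials = 3, 0.5, 200_000
    estimate = sum(tosses_until_k_heads(k, p) for _ in range(trials)) / trials
    print(estimate, k / p)  # the two values should be close (about 6 here)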
H: How to interpret $(V^*)^*$, the dual space of the dual space?
Suppose $V$ is a real vector space.
Then $V^*$, its dual space, is the vector space of linear maps $V\to \mathbb R$
How then do I interpret $(V^*)^*$, the dual space of the dual space?
AI: The space of linear maps $\ell : V \rightarrow \mathbb{R}$ is itself a vector space, with pointwise addition and scalar multiplication of functions. Thus, $(V^*)^*$ is the dual of this vector space.
There is a canonical linear transformation $\xi : V \rightarrow (V^*)^*$ defined by $\xi(v) = \xi_v$, where $\xi_v : V^* \rightarrow \mathbb{R}$ is the linear map given by $\xi_v(\ell) = \ell(v)$. The map $\xi$ is injective, so when $V$ is a finite dimensional vector space, the map $\xi$ is a (canonical) isomorphism $V \cong (V^*)^*$. However, $\xi$ is not necessarily an isomorphism if $V$ is infinite dimensional. |
H: Where's the error in this $2=1$ fake proof?
I'm reading Spivak's Calculus:
2 What's wrong with the following "proof"? Let $x=y$. Then
$$x^2=xy\tag{1}$$
$$x^2-y^2=xy-y^2\tag{2}$$
$$(x+y)(x-y)=y(x-y)\tag{3}$$
$$x+y=y\tag{4}$$
$$2y=y\tag{5}$$
$$2=1\tag{6}$$
I guess the problem is in $(3)$, it seems he tried to divide both sides by $(x-y)$. The operation would be acceptable in an example such as:
$$12x=12\tag{1}$$
$$\frac{12x}{12}=\frac{12}{12}\tag{2}$$
$$x=1\tag{3}$$
I'm lost at what should be causing this, my naive exploration in the nature of both examples came to the following: In the case of $12x=12$, we have an imbalance: We have $x$ in only one side then operations and dividing both sides by $12$ make sense.
Also, In $\color{red}{12}\color{green}{x}=12$ we have a $\color{red}{coefficient}$ and a $\color{green}{variable}$, the nature of those seems to differ from the nature of
$$\color{green}{(x+y)}\color{red}{(x-y)}=y(x-y)$$
It's like: It's okay to do the thing in $12x=12$, but for doing it on $(x+y)(x-y)=y(x-y)$ we need first to simplify $(x+y)(x-y)$ to $x^2-y^2$.
AI: We have $x = y$, so $x - y = 0$.
EDIT: I think I should say more. I'll go through each step:
$x = y \tag{0}$
This is our premise that $x$ and $y$ are equal.
$$x^2=xy\tag{1}$$
Note that $x^2 = xx = xy$ by $(0)$. So completely valid.
$$x^2-y^2=xy-y^2\tag{2}$$
Now we're adding $-y^2$ to both sides of $(1)$, so this step is completely valid; since $x=y$, we can see that it's just another way of expressing $0 = 0$, but nothing is wrong here yet.
$$(x+y)(x-y)=y(x-y)\tag{3}$$
$$x+y=y\tag{4}$$
Step $(3)$ is just basic factoring, and it is around here where things begin to go wrong. For $(4)$ to be a valid consequence of $(3)$, I would need $x - y \neq 0$ as otherwise, we would be dividing by $0$. However, this is in fact what we've done as $x=y$ implies that $x - y =0$. So $(3)-(4)$ is where things go wrong.
$$2y=y\tag{5}$$
$$2=1\tag{6}$$
As a consequence of not being careful, we end up with gibberish.
Hope this clarifies more! |
H: Lipschitz functions are compact or not?
For constants $\alpha,\lambda$ I want to determine whether $K=\{f:[a,b]\to \mathbb{R}: |f(x)-f(y)|\le \lambda|x-y|^\alpha\}$, viewed as a subset of the function space with the supremum norm ($\|f\|=\sup_{x\in [a,b]}|f(x)|$), is compact or not. I proved that $K$ isn't bounded. Please tell me whether this proof is correct. Is it enough to say $K$ is not complete?
Thanks for your helps.
source: Analysis for Applied Mathematics written by Ward Cheney.(Page:350,Problem:4)
AI: It is a standard result that a compact subset of a metric space is bounded.
It is also not hard to prove: Pick any fixed $x$, and write $B(x,r)$ for the open ball of radius $r$ centered at $x$. Then if $K$ is compact, $$K\subseteq\bigcup_r B(x,r),$$ so these balls form an open cover of $K$. Thus a finite number of them cover $K$, and you're done.
Alternatively, if $K$ is unbounded, pick $x_n\in K$ with $d(x,x_n)>n$ and check that then the sequence $(x_n)$ has no convergent subsequence. |
H: A picky question on set theory
I just came to this math statement:
Let $A,B,C$ be sets. Then: $(A\times B)\times C = A\times(B\times C)$
My question is, why is it so?
I mean,
$(A\times B)\times C = \{ ((a_1,b_1),c_1), ((a_1,b_2),c_3), \ldots \}$
and
$A\times(B\times C) = \{ (a_1,(b_1,c_1)), (a_2,(b_3,c_2)), \ldots \}$
Is there any "hidden" assumptions on this?
AI: You're correct that they aren't equal. Rather, we have a "natural" bijection $(A\times B)\times C\to A\times(B\times C)$ given by $$\bigl\langle\langle a,b\rangle, c\bigr\rangle\mapsto\bigl\langle a,\langle b,c\rangle\bigr\rangle.$$ This map preserves a lot of the "structure" we may put on products (such as topologies), so in many senses, we may treat them as the same thing, and so many texts simply ignore the distinction. |
H: $1=2$ | Continued fraction fallacy
It's easy to check that for any natural $n$
$$\frac{n+1}{n}=\cfrac{1}{2-\cfrac{n+2}{n+1}}.$$
Now,
$$1=\frac{1}{2-1}=\frac{1}{2-\cfrac{1}{2-1}}=\frac{1}{2-\cfrac{1}{2-\cfrac{1}{2-1}}}=\cfrac{1}{2-\cfrac{1}{2-\frac{1}{2-\frac{1}{2-1}}}}=\ldots
=\cfrac{1}{2-\cfrac{1}{2-\frac{1}{2-\frac{1}{2-\frac{1}{2-\dots}}}}},$$
$$2=\cfrac{1}{2-\cfrac{3}{2}}=\cfrac{1}{2-\cfrac{1}{2-\cfrac{4}{3}}}=\cfrac{1}{2-\cfrac{1}{2-\frac{1}{2-\frac{5}{4}}}}=\cfrac{1}{2-\cfrac{1}{2-\cfrac{1}{2-\frac{1}{2-\frac{6}{5}}}}}=\ldots
=\cfrac{1}{2-\frac{1}{2-\frac{1}{2-\frac{1}{2-\frac{1}{2-\ldots}}}}}.$$
Since the right hand sides are the same, hence $1=2$.
AI: Another example where dots are misleading: $$1= \frac{ 1 \cdot \color{blue}{2} \cdot \color{green}{3} \cdot \color{red}{4} \cdots}{ 2 \cdot \color{blue}{3} \cdot \color{green}{4} \cdot \color{red}{5} \cdots} \leq \frac{1}{2}$$ |
H: What 's the differece between $\cot(x)$ and $\arctan(x)$?
I know that $\displaystyle \cot(x)=\frac{1}{\tan(x)}$ and $\space \displaystyle \arctan(x)=\tan(x)^{-1}=\frac{1}{\tan(x)}$
What is the difference between these two function?
Is $\cot(x)$ the reciprocal function of $\space \tan(x) \space$ and $\arctan(x)$ is the inverse function of $\tan(x)$?
And, so the assumption that $\space \displaystyle \arctan(x)=\tan(x)^{-1}=\frac{1}{\tan(x)}$, is incorrect?
AI: There is a mistake, and the main problem is the notation. If we have a function $f$ that is 1-1 then we can think of its inverse function $f^{-1}$. This notation can mislead people into the wrong impression that $f^{-1}(x)=1/f(x)$. That's not true! The function $f^{-1}$ is the function such that $f^{-1}(f(x))=x$ and $f(f^{-1}(y))=y$. For instance, let $f : \Bbb R \to \Bbb R$ be given by $f(x)=\lambda x$ (with $\lambda\neq0$). In that case, $f$ is obviously 1-1 with inverse $f^{-1}(x)=x/\lambda$. Notice that $f^{-1}(x) \neq 1/f(x)$.
In that case we define two things for $\tan$: the reciprocal function $\cot$, which is really defined by $\cot(x) = 1/\tan (x)$, and the inverse function $\arctan$, given by the property I've mentioned above. Take a look at my answer here about the same doubt involving $\sec$; it's the same issue and it may help you.
Good luck. |
H: Characterisation of norm convergence
Let $X$ be a Hilbert space and $(x_n)\in X^{\mathbb N}$ be a sequence. Then the following statements are equivalent (with $x \in X$):
We have $x_n \to x$, i.e. $\| x_n -x \| \to 0$ and
we have $x_n \xrightarrow{\sigma} x$ (weak convergence) as well as $\|x_n\| \to \|x\|$.
Proof: Just expand the expression $\|x_n - x\|^2 = (x_n - x, x_n - x)$.
Q1: This is not true in arbitrary Banach spaces (correct?). Is there a nice example?
Q2: Does the above characterisation still hold for certain spaces which are not (pre-)Hilbertian? Does it still hold for $L^p$ with $p \ne 2$ e.g.?
edit: Apparently, this is referred to as the Radon–Riesz property, Kadets–Klee property or property (H).
AI: For Q2: in general, it holds for uniformly convex spaces.
H: Unicity (or not) of the solution of an integral equation
Given the integral equation:
$$\int_0^a f(x)\left[ \frac{d^2}{dx^2}f(x) \right]dx=a$$ with the condition:
$$\lim_{x\to\infty}f(x)=0$$ how can I find its solution?
Is the solution (if any) the only one possible?
AI: Take any twice-differentiable function $g$ defined on $[0,a]$, and write $$\int_0^a g(x)g''(x)\,dx=b.$$
All we require at this point is that $b>0$. Clearly, there is a huge number of such functions to choose from. Now define
$$f(x)=\sqrt{\frac{a}{b}} g(x)\qquad\text{for } x\in[0,a],$$
and extend $f$ to $(a,\infty)$ in whatever way you like, just so long as $$\lim_{x\to\infty} f(x)=0.$$
Again, there is a huge range of possibilities. |
H: Probability question enough data?
So,I have this probability question:
On average, 15 clients visit a store in an hour. What is the probability that:
a) None of the clients buys
b)12 clients buy
c)less than 20 clients buy.
But I don't think I have enough data to answer this :/ I thought about using the Poisson distribution...
AI: You do need a bit more information. My instinct is to consider this as a binomial distribution, therefore assuming independence etc. So what is needed for this approach is the probability that a client purchases something. You could let this be $p$ (say) and then calculate the probabilities assuming this value. For example, let $X$ be the number of clients who make a purchase, each with probability $p$. Then $X \sim \text{Binomial}(15, p)$.
a) $P(x=0) = (1-p)^{15}$
b) $P(X =12) = \binom{15}{12} p^{12} (1-p)^{3}$
c) $P(X < 20) = 1$, since at most $15$ clients visit the store.
You could use a Poisson model but then some rate parameter needs to be assumed and a similar method can be used as above using the Poisson distribution instead.
Overall more information is needed in order to explicitly give solutions. |
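For what it's worth, once a value of $p$ is assumed the binomial probabilities above are immediate to evaluate; here is a small sketch using SciPy (the value $p=0.3$ is purely an assumption, not part of the problem):

    from scipy.stats import binom

    n, p = 15, 0.3                # p = assumed probability that a client buys
    print(binom.pmf(0, n, p))     # a) P(X = 0)  = (1 - p)**15
    print(binom.pmf(12, n, p))    # b) P(X = 12) = C(15,12) p**12 (1-p)**3
    print(binom.cdf(19, n, p))    # c) P(X < 20) = 1, since X is at most 15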
H: proving existence of a crossing point
Given are two cars, that travel the distance from city A to city B in the same time.
We have to show that there is (at least one) point in time $t_0$, when the two cars have exactly the same speed.
I approached this as follows:
let $f_1(t)$ and $f_2(t) : [0,1] \rightarrow \mathbb{R}$ describe the speed with respect to time of both cars, respectively.
we know that $f_1(0) = f_2(0) = f_1(1) = f_2(1) = 0$ and $f_{1,2} \geq 0$
we have to show $\exists t_0 \in [0,1]: f_1(t_0) = f_2(t_0)$
Which basically means that I have to show that there's at least one crossing point in the plot of the two graphs. How to show this? I was thinking about constructing another pair of helper functions, so that I get this point $t_0$ onto the x-axis, which would enable me to apply the zero value theorem, but I'm a bit clueless when it comes to finding such a pair of functions.
AI: Note that $f_1(t),f_2(t)$ are continuous on $[0,1]$ and $\int_{0}^{1}f_1(t)dt=\int_{0}^{1}f_2(t)dt$. So that $\int_{0}^{1}(f_1-f_2)(t)dt=0$. Use mean value theorem for integrals for the function $f_1-f_2$ and observe that $\exists c \in (0,1)$ such that $(f_1-f_2)(c)=\frac{1}{1-0}\int_{0}^{1}(f_1-f_2)(t)dt=0$ and hence $f_1(c)=f_2(c)$. |
H: Existence of eigenvalues for self-adjoint maps in finite-dimensional inner product spaces
For a finite-dimensional inner product space over $\mathbb{C}$, it is clear that every linear transformation is diagonalisable. In my lecture notes, the lecturer claims that:
For a finite-dimensional inner product space, every self-adjoint transformation has at least one eigenvector.
This fact is then used in a proof of the Spectral Theorem for Hermitian matrices.
Over $\mathbb{R}$ for example, we have - in general - no reason to assume that a linear map has an eigenvalue, so why is it the case that a self-adjoint map must have one?
AI: Note: over a finite-(nonzero)-dimensional complex vector space, every linear transformation has at least one eigenvalue, since the characteristic polynomial splits. It follows by induction that it is upper triangularizable. But not every linear transformation, even over $\mathbb{C}$, is diagonalizable. Typically, consider the Jordan block
$$
T=\pmatrix{0&1\\0&0}.
$$
Key point: for a self-adjoint transformation, every eigenvalue is actually real. Indeed, if $T^*=T$ and if $Tx=\lambda x$ with $x\neq 0$, then
$$
\lambda (x,x)=(x,\lambda x)=(x,Tx)=(Tx,x)=(\lambda x,x)=\overline{\lambda}(x,x)\quad\Rightarrow\quad \lambda=\overline{\lambda}
$$
where I took an inner-product antilinear in the first variable.
Conclusion: it suffices to prove the result for matrices, up to taking the matrix $A$ of $T$ in an orthonormal basis. Since the characteristic polynomial of $A$ splits over $\mathbb{C}$ and since every root is real, it splits over $\mathbb{R}$. And the characteristic polynomial of $A$ is the same, whether you look at it as an element of $M_n(\mathbb{R})$, or as an element of $M_n(\mathbb{C})$. So the characteristic polynomial of $T$ splits over $\mathbb{R}$. And in general, for a linear transformation $T$ over a finite-dimensional $K$-vector space, every root $\lambda$ in $K$ of the characteristic polynomial in $K[X]$ is an eigenvalue of $T$. And conversely. Since both are equivalent to $T-\lambda \mbox{Id}$ not being invertible. Via the determinant for the root side. And via the rank-nullity theorem on the eigenvalue side.
Another way to reach the conclusion: now take $A$ the matrix of $T$ in an orthonormal basis. Then $T$ is self-adjoint if and only if $A^*=A$ is Hermitian (equal to its transconjugate). Over $\mathbb{R}$, this is just $A^T=A$ (i.e. $A$ symmetric, equal to its transpose). But we can see $A$, even if it is in $M_n(\mathbb{R})$, as an element of $M_n(\mathbb{C})$. So it has at least an eigenpair $(\lambda,x)$ in $\mathbb{C}\times \mathbb{C}^n$. But as we have just shown, $\lambda$ must be real. Therefore, writing $x=y+iz$ where $y$ and $z$ are the real and imaginary parts of $x\in\mathbb{C}^n$ in $\mathbb{R}^n$, we get
$$
Ay+iAz=A(y+iz)=Ax=\lambda x=\lambda (y+iz)=\lambda y+i\lambda z\quad \Rightarrow \quad Ay=\lambda y\quad Az=\lambda z.
$$
Since $x\neq 0$, one of $y,z$ must be nonzero, giving a real eigenpair for $A$, and hence for $T$, going back from the matrix to the operator.
H: Autocovariance of moving average process
Let $\epsilon_t \sim \text{i.i.d.}(0,1)$, and $X_t=\epsilon_{t}+0.5\epsilon_{t-1}$. I need to find its autocovariance function.
I know that $E(X_t)=0$, $E(\epsilon_{t})=0$. Let's say, that $s=t+1$:
$Cov(X_t,X_s)=E[(\epsilon_t+0.5\epsilon_{t-1})(\epsilon_{t+1}+0.5\epsilon_{t})]=E[\epsilon_{t}\epsilon_{t+1}+0.5\epsilon_{t}^2+0.5\epsilon_{t-1}\epsilon_{t+1}+0.25\epsilon_{t-1}\epsilon_{t}]=0+0.5E(\epsilon_{t}^2)+0.5E(\epsilon_{t-1}\epsilon_{t+1})+0=0.5E(\epsilon_{t}^2)+0.5E(\epsilon_{t-1}\epsilon_{t+1})=\ldots\text{?}$
Can I presume, that
$E(\epsilon_{t}^2)=E(\epsilon_{t}\epsilon_{t})=E(\epsilon_{t})E(\epsilon_{t})=0$ ?
AI: If $\epsilon_t$ is i.i.d $(0,1)$, then $E[\epsilon_t]=0$ and $E[(\epsilon_t-E[\epsilon_t])^2]=E[(\epsilon_t-0)^2]=1$ for any $t$. So the answer to your question is "no".
EDIT. A short add-on. It follows from the formulae above that
$\operatorname{Cov}(X_t,X_{t+1})=0.5\cdot 1+0.5\cdot\operatorname{Cov}(\epsilon_t,\epsilon_{t+1})$,
where
$\operatorname{Cov}(\epsilon_t,\epsilon_{t+1})=E[(\epsilon_t-E[\epsilon_t])(\epsilon_{t+1}-E[\epsilon_{t+1}])]=E[\epsilon_t\epsilon_{t+1}]=E[\epsilon_t]\,E[\epsilon_{t+1}]=0$ by independence, so $\operatorname{Cov}(X_t,X_{t+1})=0.5$.
H: Questions about the Space of Matrix Coefficients
Apologies in advance for the basic question: In reading up on representation theory, I came across a confusing definition for the $M(\rho)$, the space of matrix coefficients of a representation $(G, \rho, E)$:
Let $\rho_{ij}(s)$ be a matrix representation of $\rho(s)$ in some basis for $E$. Then $\rho_{ij}$ is a function $G \to \mathbb k$ (the underlying field). Let $M(\rho) = \mbox{span} \{\rho_{ij} : i,j=1,\ldots,\dim E\}$.
My confusion lies in the disappearance of $s$ from above; are we to take the span of the matrix coefficients from $\rho_{ij}(s)$ for all $s \in G$? I assume so, since that's the only thing that really makes sense, but I wanted to check.
In addition, after showing that this definition is basis-independent and showing how $M(\rho)$ can be viewed as a $G \times G$-module, it is stated that if $\rho, \rho'$ are two non-equivalent representations, then $M(\rho)$ and $M(\rho')$ are linearly independent, since they afford two non-equivalent representations of $G \times G$. Is there a good intuition for why this is true?
AI: The disappearance of $s$ is due to the sentence "then $\rho_{ij}$ is a function $G\to\mathbb{k}$". The function $\rho_{ij}$ acts on $G$ by taking an element $s\in G$ to the $(i,j)$-th entry of the matrix $\rho(s)$ in the chosen basis for $E$. The $\rho_{ij}$ are thus elements of the vector space $(\mathbb{k}G)^*$, the $*$ denoting dual, and it makes sense to take their span. |
H: Removing principal part to get analytic function in disk
I'm working on this problem:
Suppose $f$ is analytic on the disk $|z| < 2$ except at $z = 1$, where $f$ has a
simple pole. If
$$f(z) = \sum_{n=0}^{\infty}a_nz^n$$
is the Taylor series expansion for $f$ around $z = 0$, prove that $\lim\limits_{n\to \infty} a_n$
exists and equals the residue of $f$ at $z = 1$.
Here's what I tried:
Let $P(z) = \displaystyle \frac{A}{z-1}$ be the principal part of $f$ near $z = 1$. Then $$g(z) := f(z) - P(z) = f(z) + \displaystyle \frac{A}{1-z}$$ is analytic in $|z| < 2$ when $z \ne 1$ since both $f$ and $P$ are. It's also analytic at $|z| = 1$ since the Laurent series there now has no principal part.
So $f - P$ is in particular analytic at $z = 0$ and so has a Taylor series there
$$g(z) = \sum_{n=0}^{\infty}(a_n + A)z^n$$
Since $g$ is analytic on $|z| < 2$ the Taylor series for $g$ converges on $|z| = 1$. Therefore $\lim\limits_{n\to \infty} (a_n + A) = 0$ and so $\lim\limits_{n\to \infty} a_n = -A$.
So my answer seems to have a different sign than the question suggests it should have, but I don't see the error in my answer.
Thanks for any help.
AI: Your derivation seems correct. The question is incorrectly stated though. Take one of the simplest possible examples, namely
$$ f(z) = \frac{1}{1-z} = \sum_{n=0}^\infty z^n $$
Here $\lim_{n\to\infty} a_n = 1$, but the residue of $f$ at $z=1$ is $-1$, not $1$. |
H: Do spaces satisfying the first axiom of countability have monotone decreasing bases for every point?
I'm facing some problems with this. Every proof I've read assumes this, although it is not obvious to me as of now.
First axiom of countability as defined in my book- for every point $x\in X$, there is a countable family of open sets $\{B_{n}(x)\}$ ($n\in\mathbb{N}$) such that for any open set $G$ containing $x$, there is a $B_{n}\subseteq G$.
Why isn't it possible that $B_{i}$ and $B_{j}$ be intersecting (both contain $x$), but one is not a subset of the other? $i\neq j$ and $i,j\in\mathbb{N}$.
Thank you for your time.
AI: As Abel mentioned, they cannot be disjoint, as they both have to contain $x$.
However, the definition does not require $\{B_n\}_{n\in \mathbb N}$ to be decreasing. One can always arrange for them to be decreasing by setting:
$$C_i=\cap_{j=1}^iB_j$$
It follows that $\{C_i\}$ is a decreasing sequence of open sets that satisfy:
for any open set $G$ containing $x$, there exists $n$ such that $C_{n}\subseteq G$. |
H: Definition of submatrix
Let $A$ a matrix, I need the definition of sub matrix of $A$. Thanks in advance.
AI: Try our dear friend Wikipedia.
Elaboration, as requested. Let $A\in M_{m,n}$ have $m$ rows and $n$ columns. Set $S\subseteq \{1,2,\ldots, m\}$, $T\subseteq \{1,2,\ldots, n\}$, such that both $S,T$ are nonempty.
We name these elements as $S=\{s_1, s_2,\ldots, s_{|S|}\}, T=\{t_1,t_2,\ldots, t_{|T|}\}$, and assume that $s_1<s_2<\cdots<s_{|S|}$ and $t_1<t_2<\cdots<t_{|T|}$.
Then we define submatrix $B$ via $$(B)_{i,j}=(A)_{s_i,t_j}$$ |
H: How do I know if the linear system has a line of intersection?
I was wondering how can I determine if there is a line of intersection with any matrix?
For example, if I have the following matrix:
$$\left(\begin{array}{rrr|r}
1 & -3 & -2 & -9 \\
2 & -5 & 1 & 3 \\
-3 & 6 & 2 & 8 \\
\end{array} \right)$$
What does the solution have to look like for me to conclude that there is a line of intersection?
P.S. I know this matrix has a point of intersection but I used this as an example because I didn't know an example for a matrix that had a line of intersection.
AI: The existence of a "point of intersection" is the existence of a point $(x, y, z)$ that satisfies the system of equations: a point lying on each plane represented by the corresponding equation:
$$\begin{align} x - 3y -2z & = -9 \\ 2x - 5y + z & = 3 \\ -3x + 6y + 2z & = 8 \end{align}$$
A unique solution exists (a single point of intersection) if row reduction produces no row whose coefficient entries are all zero: neither a row of all zeros, nor a row of zeros augmented by a non-zero entry.

If you obtain a row of all zeros through row reduction, then there are infinitely many solutions (the planes share at least a common line): one row of the system is a linear combination of the others, so the equations are linearly dependent. Put differently, there will be a line (or even a plane) of intersection.
If the matrix reduces to a row of three zeros, with a non-zero entry in the last column of that row, no solution exists (i.e., no point of intersection exists.)
All these possibilities can be determined by reducing the matrix to row echelon form.
We can reduce your example matrix:
$$\left(\begin{array}{rrr|r}
1 & -3 & -2 & -9 \\
2 & -5 & 1 & 3 \\
-3 & 6 & 2 & 8 \\
\end{array}\right)$$
$$\left(\begin{array}{rrr|r}
1 & -3 & -2 & -9 \\
0 & 1 & 5 & 21 \\
0 & 0 & 11 & 44 \\
\end{array}\right)$$
If reduced further, you'd see that the system has a unique solution, i.e. the three planes meet in a single point, hence there is no common line of intersection.
Note that if the last row were $(0\;\;2\;\;10\;\;42)$, we could "zero it" by taking $-2R_2 + R_3 \rightarrow R_3$, and obtain a row of all zeros, since the third row would be a scalar multiple of the second row. In that event, there would indeed be a line of intersection. If any row is a linear combination of the other rows, we have a linearly dependent system of equations: this shows when a row-reduced matrix has a row of all zeros. And in that case, there is at least a line of intersection. |
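As a quick cross-check of the reduction above (a sketch I added, using NumPy, not part of the original argument), the coefficient matrix has full rank, so the three planes meet in exactly one point:

    import numpy as np

    A = np.array([[ 1, -3, -2],
                  [ 2, -5,  1],
                  [-3,  6,  2]], dtype=float)
    b = np.array([-9, 3, 8], dtype=float)

    print(np.linalg.matrix_rank(A))  # 3, so there is a unique solution
    print(np.linalg.solve(A, b))     # the unique point of intersection, [2. 1. 4.]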
H: First fundamental form and angle between curves
On a surface for which $$ds^2=du^2+dv^2$$
find the angle between the lines $v=u$ and $v=-u$. This exercise is related to the first fundamental form. I think I need to find the angle between curves with parametrization: $$u(t)=t, v(t)=t$$ and $$u(s)=s, v(s)=-s$$ therefore $$du=dt, dv=dt$$ and $$du*=ds, dv*=-ds$$
Now I can make tangent vectors to both curves $$dr=\frac{dr}{du} dt + \frac{dr}{dv} dt$$ $$dr*=\frac{dr*}{du} ds - \frac{dr*}{dv} ds$$ Am I thinking right? Unfortunately at this point I don't have more ideas. For finding the angle I can use $$\cos(a)=\frac{dr\cdot dr^*}{|dr|\,|dr^*|}$$ And then I could use the numbers "1", "0", "1" from the first fundamental form. But I'm absolutely unsure, so I would like to ask you about it. Thank you.
AI: You can parametrize your curves as $c_1(t)=(t,t)$ and $c_2(t)=(t,-t)$ (first coordinate: $u$). You are searching for the angle between the tangent vectors $v=(1,1)$ and $w=(1,-1)$ (tangent vectors along $c_1$ and $c_2$: note that they do not depend on $t$) using the metric $g=du^2+dv^2$ with components $g_{ij}(u,v)=\delta_{ij}$ for $i,j=1,2$.
Then the angle $\theta$ is s.t. $\cos\theta=\frac{\langle v,w\rangle}{\|v\| \|w\|}$,
with $\langle v,w\rangle=g_{ij}v_iw_j$ and $\|v\|=\sqrt{g_{ij}v_iv_j}$ (and similarly for $\|w\|$). We sum over repeated indices. Here $\langle v,w\rangle=1\cdot1+1\cdot(-1)=0$, so $\theta=\pi/2$: the two lines are orthogonal.
H: recurrence criterion for random-walk like Markov chain
Suppose we have a random-walk like Markov chain, i.e. state space is the set of all integers from $-\infty$ to $\infty$, the transition probability $P_{ij}$ is nonzero only when $j=i+1$ or $j=i-1$.
The difference from the random walk is that the transition probability is periodic in state space i.e. $P_{i,i+1}=p_1,P_{i+1,i+2}=p_2,\dots,P_{i+L-1,i+L}=p_L,P_{i+L,i+L+1}=p_1$, and none of $p_i$ is 0 or 1.
My question is: under what conditions (in terms of $\{p_i\}$) is this Markov chain recurrent or transient?
AI: For every $x$, let $T_x$ denote the first hitting time of $x$. Looking at the chain at its successive visits of multiples of $L$, one sees that there is recurrence if and only if $h_L=\frac12$, where, for $0\leqslant x\leqslant 2L$,
$$
h_x=P_x(T_{2L}\lt T_0).
$$
The computation of $h_L$ involves the finite Markov chain on $\{0,1,\ldots,2L\}$ hence the standard approach applies, which is as follows.
Let $r_x=r_{x+L}=p_{x+1}$ for $0\leqslant x\leqslant L-1$. By the Markov property, the vector $(h_x)_{0\leqslant x\leqslant 2L}$ solves the linear system
$$
h_{2L}=1,\quad h_0=0,\quad h_x=r_xh_{x+1}+(1-r_x)h_{x-1}\quad (1\leqslant x\leqslant 2L-1).
$$
The change of variable $h_x=\sum\limits_{y=1}^xk_y$ yields $r_xk_{x+1}=(1-r_x)k_x$ for every $1\leqslant x\leqslant 2L-1$, which is easily solved. The normalization $h_{2L}=1$ yields finally
$$
h_x=\frac{K_x}{K_{2L}},\qquad
K_x=\sum\limits_{y=1}^xR_{y-1},\qquad R_{y}=\prod_{z=1}^{y}\frac{1-r_z}{r_z}.
$$
Up to this point, this is entirely general and applies to every random walk on the integers with transitions $(r_x)$. We now turn to the specifics of the case at hand.
Due to the periodicity of $(r_x)$, for every $1\leqslant y\leqslant L$, $R_{y+L}=R_LR_y$ hence
$$
K_{2L}-K_L=\sum\limits_{y=L+1}^{2L}R_{y-1}=\sum\limits_{y=1}^{L}R_{y+L}=R_L\sum\limits_{y=1}^{L}R_{y}=R_LK_L,
$$
which yields the simple identity
$$
h_L=\frac1{1+R_L},\qquad
R_L=\prod_{x=1}^L\frac{1-p_x}{p_x}.
$$
This shows that:
if $R_L=1$, then the chain is null recurrent,
if $R_L\lt1$, then $h_L\gt\frac12$ hence the chain is transient (and goes to $+\infty$ at linear speed),
if $R_L\gt1$, then $h_L\lt\frac12$ hence the chain is transient (and goes to $-\infty$ at linear speed). |
H: Find the values of the constants in the following identity $A(x^2-1)+B(x-1)+C = (3x-1)(x+1)$
I'm stuck on a basic question regarding identities.
$A(x^2-1)+B(x-1)+C = (3x-1)(x+1)$
I've managed to substitute $x$ for $1$ to work out C is $4$. However, I'm unsure how to work out A and B respectively.
AI: For $x=-1$ we have $-2B+C=0\Rightarrow 2B=C=4\Rightarrow B=2$;
for $x=0$ we have $-A-B+C=-1 \Rightarrow A=1-B+C=1-2+4\Rightarrow A=3$.
H: Littlewood's isoperimetrical problem
Please consider the following self-contained excerpt from Chapter $1$ of Littlewood's A MATHEMATICIAN'S MISCELLANY. I have two questions:
1) How is the second (weak) inequality derived?
2) How does the result follow from the two (weak) inequalities?
AI: I don't think the part starting with "Suppose..." is related to this proof, but rather Littlewood is going to the next topic.
The integral formula for the area, plus the inequality $OP^2+OQ^2\leq 1$ is enough to deduce that the area is at most $\frac{\pi}{4}$.
I don't know why the inequality is true, but if $x>0$ and $a_n=xn$ then $$\frac{1+a_{n+1}}{a_n} = 1+ \frac{1+x}{x}\frac{1}{n}$$
So $$\lim_{n\to\infty}\left(\frac{1+a_{n+1}}{a_n}\right)^n = e^{\frac{1+x}{x}}$$
So the $\limsup$ above can be made arbitrarily close to $e$, so $e$ is the best lower bound. |
H: On projective dimension of quotients of polynomial rings
Let $A$ be a commutative ring, $B=A[X]/(X^2)$, and $C=B/(x)$. (Here $x$ denotes the residue class of $X$ modulo $(X^2)$.) Why the projective dimension of $C$ is infinite ?
AI: I'm assuming you meant projective dimension as a $B$-module.
Have you tried finding a projective resolution? It's not that hard. Here's how: First off, you have an obvious surjection $B \to C \to 0$ sending the class of $X$ to zero. What is the kernel? Well, it's the ideal generated by $X$. So we have a minimal projective resolution of the form
$$
\cdots \to B \xrightarrow{\cdot x} B \xrightarrow {\cdot x} B \xrightarrow{x \mapsto 0} C \to 0$$
It is minimal because each map lands in the maximal ideal.
You must prove that it is minimal, however. |
H: how to know of the number of real roots?
Let $ax^4 +bx^3 +cx^2 +dx + e = 0$ with $a,b,c,d,e\in\mathbb R$. I would like to know how I can determine the condition for the polynomial to have exactly three distinct real solutions.
Does one root have to be a double root, or is there any other possibility?
I need help please.
Thanks
AI: Depending on what you mean: if the equation has a double root, e.g. $x^2(x+1)(x-1)=0$, it can have three distinct real roots, but we've counted one twice.
So there are two conditions:
(i) One double root
(ii) The other two roots are real and distinct |
H: Probability that the first digit of $2^{n}$ is 1
Let $a_{n}$ be the number of terms in the sequence $2^{1},2^{2},\cdots ,2^{n}$ which begins with digit 1.
Prove that $$\log2 -\frac{1}{n}<\frac{a_{n}}{n}<\log2\text{ (log base is 10)}$$
Note: This is only a part of the question. The actual question is: Prove that the probability that a randomly chosen power of 2 begins with 1 is $\log2$.
The rest is quite easy (once I've proven the above inequality). Could anyone give me a hint for solving this question? Thanks!
AI: Hint 1: Show that there is always a power of 2 that has $k$ digits and starts with 1. (For $k=1$, I'm including $2^0=1$.) Use Shreevastsa's hint about logarithms.
Hint 2: Show that there is at most 1 power of 2 that has $k$ digits and starts with 1.
Hence, there are exactly $k$ powers of 2 from 1 to $2 \times 10^k$ that start with 1.
For any $n$, $2^n$ has $ \lfloor n \log 2 \rfloor + 1$ digits, hence $a_n = \lfloor n \log 2 \rfloor$, so $a_n < n \log 2$, which gives us the RHS.
For the LHS, show that $\lceil n \log 2 \rceil - 1 = a_n $. |
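A small computational sanity check of $a_n=\lfloor n\log 2\rfloor$ (my own sketch, not part of the proof):

    import math

    def a(n):
        # Count the terms 2^1, ..., 2^n whose decimal expansion starts with the digit 1.
        return sum(1 for k in range(1, n + 1) if str(2 ** k).startswith('1'))

    for n in (10, 100, 1000):
        print(n, a(n), math.floor(n * math.log10(2)))  # the last two columns agree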
H: Closure and pathwise connected
On $\mathbb R^2$ I consider the following set $G:=\{(x,\sin(\frac{1}{x}))| x\in (0,1]\}$
Well this set is pathwise connected (as the graph of a continuous function on an interval). The function is also oscillating on the left hand side so it makes sense to write the closure as $\bar{G}=G\cup(\{0\}\times[-1,1])$, but how can this be proved formally?
The next thing which is not clear to me is why $\bar{G}$ is connected but not pathwise connected, so there is no continuous path $c:[0,1]\rightarrow \bar{G}$ with starting point $c(0)=x$ and endpoint $c(1)=y$.
AI: Take a point $(0,y)$ for $-1\leq y\leq 1$. If $x=\arcsin(y)$ then we know that $\sin(x+2\pi n)=y$ for every $n\in\mathbb{N}$, so $(\frac{1}{x+2\pi n},\sin(x+2\pi n))$ is a sequence (in $n$) of points in $G$ which converges to $(0,y)$; hence $(0,y)$ is in the closure of $G$. (Note that $x+2\pi n>1$ for $n\ge 1$, so these really are points of $G$.) Then as the set $G\cup (\{0\}\times[-1,1])$ is a closed set containing all limit points, it is indeed the closure.
You can prove directly that if $S$ is any connected subset of $\mathbb{R}^n$, then any $T$ satisfying $S\subseteq T\subseteq \overline{S}$ is connected. Indeed, this holds in general. (For instance, let $f:T\rightarrow \{0,1\}$ be a continuous function; as $S\subseteq T$ is connected, $f$ is constant on $S$. As every point in $T\setminus S$ is a limit point of $S$, there is a sequence of points in $S$ converging to it. Applying $f$ to this sequence, continuity gives that $f$ maps each point of $T\setminus S$ to the same value as it takes on $S$. Hence $f$ is constant.)
For the failure of path connectedness, notice that if you want to find a path $c$ from, say, $(\frac{1}{2\pi},0)$ to $(0,1)$, then by continuity every sequence of points $\{a_n\}_{n=1}^\infty$ in $[0,1]$ converging to $1$ has to satisfy $c(a_n)\rightarrow c(1)=(0,1)$; however, if you take the troughs of $(x,\sin(\frac{1}{x}))$ you can show this is not the case. To be more formal, let $U$ be an open subset of $\overline{G}$ containing $(0,1)$ but not containing any points of the $x$ axis. By continuity of $c$ we know that the image of $c$ is eventually contained in $U$, i.e. there exists $0<t<1$ so that $c([t,1])\subseteq U$; however, it should be clear that $c$ has to go up the peaks and down the troughs of $G$ and so can't remain in $U$.
H: first order ordinary differential equation
How can I solve this ODE of first order:
\begin{align} y^{'}= y^{2}+x, & \text{where } y^{'}=\frac{dy}{dx} \end{align}
Is there any exact method to solve it?
Thank you.
AI: Make the change of function $\displaystyle y=-\frac{v'}{v}$. This transforms your differential equation into the linear equation
$$v''+xv=0$$
which is solvable in terms of Airy functions (it is essentially Airy equation with $x\leftrightarrow -x$):
$$v(x)=C_1\mathrm{Ai}(-x)+C_2\mathrm{Bi}(-x).$$ |
H: A formula to find the organs's value from $1$ to $100$.
We have a variable named NUMBER
This variable, can hold ANY number from $1$ to $100$.
Let's mark that number as $X$.
We know that:
$x(1) = 1000$ Coins
$x(100) = 250$ Coins
I need to write down a formula that will automatically find the coins amount of the variable $x$ (NUMBER).
Any ideas?
AI: There are many choices: one approach is $$\begin{cases} x(NUMBER)=1000 & NUMBER \neq 100 \\ x(NUMBER)=250 & NUMBER=100 \end{cases}$$ which is probably not what you are thinking. You need to specify better what you want. Probably what you are thinking is $$x(NUMBER)=1000-\frac {750}{99}(NUMBER-1)$$ which is a straight line.
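If you want that last formula as code, here is a minimal sketch (the function and variable names are my own choice):

    def coins(number):
        # Linear interpolation: 1000 coins at number = 1, 250 coins at number = 100.
        return 1000 - 750 / 99 * (number - 1)

    print(coins(1), coins(100))  # 1000.0 250.0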
H: Sum of products of elements in matrix form.
Suppose I have two matrices $\textbf{A}$, and $\textbf{B}$ as follows:
$$\textbf{A}
=
\left[ \begin{array}{ccc}
a_{11} & a_{12} & a_{13}\\
a_{21} & a_{22} & a_{23}\\
a_{31} & a_{32} & a_{33}\\
\end{array}\right]
\qquad\text{and}\qquad
\textbf{B}
=
\left[ \begin{array}{ccc}
b_{11} & b_{12} & b_{13}\\
b_{21} & b_{22} & b_{23}\\
b_{31} & b_{32} & b_{33}\\
\end{array}\right]$$
I am looking for a way to write, in matrix language, this operation:
$(a_{11}b_{11}+a_{12}b_{21}+a_{13}b_{31})+(a_{21}b_{12}+a_{22}b_{22}+a_{23}b_{32})+(a_{31}b_{13}+a_{32}b_{23}+a_{33}b_{33})=k$
The matrices can be rectangular, in such a way that $\textbf{A}$ can be of $m\times n$ dimension and $\textbf{B}$ of $n\times m$ dimension. If this is possible, I would be grateful for a clue.
Thanks.
AI: If $C = A B$ is the matrix product, then the elements are $c_{i,j} = \sum_k a_{i,k} b_{k,j}$. Then the trace of $C$ will be
$$\operatorname{Tr}(C) = \sum_k c_{k,k} = \sum_k \Big( \sum_l a_{k,l}b_{l,k}\Big)$$
which seems to be what you are looking for. |
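A quick NumPy check that the expression in the question is exactly this trace (an illustrative sketch with random rectangular matrices):

    import numpy as np

    m, n = 3, 4
    A = np.random.rand(m, n)
    B = np.random.rand(n, m)

    k_direct = sum(A[i, :] @ B[:, i] for i in range(m))  # sum of the diagonal entries of AB
    print(np.isclose(k_direct, np.trace(A @ B)))          # True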
H: Least Squares Plane using Matricies
For a Least Squares solution to a 2D set of coordinates, the formula is:
$X^T\,X\,\vec b = X^Ty$ (where $X^T$ denotes $X$ transpose)
(for: $y = B_0 + B_1x + B_2x^2$)
where:
$$X = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ 1 & x_3 \\ \vdots & \vdots \end{bmatrix}, \qquad
b = \begin{bmatrix} B_0 \\ B_1 \\ B_2 \end{bmatrix}, \qquad
y = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \end{bmatrix}$$
My question is:
What do I need to change in order for me to use least squares to solve for a 3d surface?
Also, what would my equation look like? $y = B_0 + B_1x_1 + B_2x_2 + B_3{x_1}^2 + B_4{x_2}^2$ ?
Thanks a million.
AI: So to clarify, you're looking for the best approximation of the form $z=f(x,y)=a_xx+a_yy+a_0$ in $xyz$-space for a set of $x,y,z$ data points. In order to do that, we begin by supposing that there is an exact solution. If that were the case, our system of equations would look like this:
$$
a_0+a_xx_1+a_yy_1=z_1\\
a_0+a_xx_2+a_yy_2=z_2\\
\vdots
$$
and so forth. We can rewrite this in matrix-form as follows:
$$
\left[
\begin{array}{ccc}
1 & x_1 & y_1 \\
1 & x_2 & y_2 \\
\, & \vdots & \, \\
1 & x_n & y_n
\end{array}
\right]
\,\left[
\begin{array}{c}
a_0\\
a_x\\
a_y
\end{array}
\right]=
\left[
\begin{array}{c}
z_1\\
z_2\\
\vdots\\
z_n
\end{array}
\right]
$$
From there, getting the least squares solution simply a matter of applying the same logic as before. Setting
$$
A=
\left[
\begin{array}{ccc}
1 & x_1 & y_1 \\
1 & x_2 & y_2 \\
\, & \vdots & \, \\
1 & x_n & y_n
\end{array}
\right];
\vec x=
\left[
\begin{array}{c}
a_0\\
a_x\\
a_y
\end{array}
\right];
\vec y =
\left[
\begin{array}{c}
z_1\\
z_2\\
\vdots\\
z_n
\end{array}
\right]
$$
we have
$$
A^TA \vec x = A^T\vec y
$$
And must solve for $\vec x$. |
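In case a concrete computation helps, here is a small NumPy sketch of exactly this plane fit (the data are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 50)
    y = rng.uniform(-1, 1, 50)
    z = 2.0 + 0.5 * x - 1.5 * y + 0.05 * rng.standard_normal(50)  # noisy plane

    A = np.column_stack([np.ones_like(x), x, y])   # rows are (1, x_i, y_i)
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)   # least squares solution of A x = z
    print(coef)                                    # approximately [a0, ax, ay] = [2.0, 0.5, -1.5]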
H: Simple Combination of cards
I'm a bit ashamed to ask such a simple question, but my math skills are a bit rusty to say the least. Here's the big deal:
I have 10 cards, 5 black and 5 white. How many combinations can I make with those cards while using all of them?
- Obviously, permuting 2 same-color cards won't create a new combination.
I'd like to know the formula that I shall use in this situation.
Thanks
Fabien
AI: You just have to pick $5$ spots out of the $10$ for the white cards. So you have ${10 \choose 5}=252$ |
H: Some doubts about predicate calculus
I am studying the predicate calculus in First Order Logic and I have some doubt about this argument. In my book I find that a formula in the predicate calculus is built from
Literals constructed from
Predicate Symbols: $P = \{p,q\}$
Constants: $A = \{a,b\}$
Variables: $X = \{x,y\}$
Logical Connectives: (and, or, not, implication, etc)
Quantifiers: (univeral and existential)
To build a syntactically correct formula I can use a BNF grammar.
So for example this one is a valid predicative forumla:
$$ \forall x.\exists y.(p(x,y) \to p(y,x)) $$
My question is: “what exactly is $p$?”
I think that $p$ is a predicate that represents a 2-ary relation. For example, $p$ could be something like the relation that represents whether a number is the square root of another number. So for example:
\begin{gather}
SQ(2,1) = F \\
SQ(2,2) = F \\
SQ(2,3) = F \\
SQ(2,4) = T
\end{gather}
So if $p$ represents a 2-ary relation, that relation is a subset of the 2-fold Cartesian product $D\times D$ (in the previous case it contains exactly the pairs in which the first number is the square root of the second number).
So $p$ is a predicate that represents a $n$-ary relation $R$ on a domain $D$ and $R$ is a subset of $D^n$.
So, the predicate $p$ related to $R$ is true if:
$$R(d_1,\dots,d_n) = T \text{ iff } \langle d_1,\dots,d_n\rangle \in R $$
So, coming back to my previous predicative formula, can I say that it is built from some predicative symbol that uses variables and constants connected together by logical connectives, and in which I can “select” some or all variables by the universal and existential quantifiers?
AI: The predicate symbol $p$ is part of the language. It has no meaning. It can only be part of a well formed formula. The interpretation of that language is a structure. It is in that structure that $p$ is interpreted as an $n$-ary relation on a given universe.
Let's use your square root predicate as an example. Suppose the symbol is 'SQ' and it is given in our language as a binary predicate symbol. Since it is a binary predicate $SQ(t_1,t_2)$ is a formula for any two terms $t_1$ and $t_2$ of our language. That is really all we can say about 'SQ' without a structure.
Now, if we have a structure that interprets our language, it must give meaning to a formula like $SQ(1,2)$, where $1$ and $2$ are also terms in our language. In your example, you interpreted this as meaning "the square root of $2$ is $1$". This is probably going to be evaluated as false in the structure you had in mind. |
H: correct expansion of a sum using multiple indexes
I have looked for a similar posting but haven't found anything... but then I am also a bit unsure of how to search because I've never posted a math question before. In my introductory finite element method course the prof was introducing multiple index notation and I don't understand the way that a sum has been expanded while using this notation. The example given is:
\begin{equation}
\sum_{|\alpha| \le\ 2}a_\alpha x^\alpha, x \in\ \mathbb{R}^2
\end{equation}
Which I understand expands to:
\begin{equation}
= \sum_{|\alpha| =\ 0}a_\alpha x^\alpha + \sum_{|\alpha| = 1}a_\alpha x^\alpha + \sum_{|\alpha| = 2}a_\alpha x^\alpha
\end{equation}
And here is where I am lost:
\begin{equation}
= a_{00}x_1^0x_2^0 + a_{10}x_1^1x_2^0 + a_{01}x_1^0x_2^1 + a_{11}x_1^1x_2^1 + a_{20}x_1^2x_2^0 + a_{02}x_1^0x_2^2
\end{equation}
Which of these terms corresponds to which sum?
Why does "a" have a double index?
If $|\alpha| = 2 = \alpha_1 + \alpha_2$, what does $|\alpha|=0$ mean? 0? Empty?
How do I know the values of $\alpha_1$ and $\alpha_2$?
Please note that it is entirely possible that I have copied some of this out wrong, the prof has nearly indecipherable handwriting. If someone can point me in the right direction it would be greatly appreciated.
AI: It sounds like the double index under $a_{ij}$ exactly corresponds to the powers $x_1^i x_2^j$. The sum in question assumes $\alpha$ has 2 digits, and $|\alpha| = 1$ means the digits sum to $1$.
For simplicity write $\alpha = (i,j), a_\alpha = a_{ij}, x^\alpha = x_1^i x_2^j$ and $|\alpha| = i+j$.
So you are right, the first step expands $|\alpha| \leq 2$ into three cases of $0,1,2$. The second step expands $|\alpha| = i+j = 0$ into the only possible case $(i,j) = (0,0)$.
The second one has them sum to one: $(i,j) \in \{(0,1), (1,0)\}$.
The third one has them sum to two: $(i,j) \in \{(0,2), (1,1), (2,0)\}$. |
H: How does one solve this Sum?
How does one compute, for integers $a,b\ge0$, $$\sum_{i=0}^b (-1)^{(b-i)} \dfrac{1}{a+b-i}\dbinom{b}{i}$$
AI: $\displaystyle(1-x)^{b}=\sum_{i=0}^{i=b}\dbinom{b}{i}(1)^{i}(-x)^{b-i}=\sum_{i=0}^{i=b}\dbinom{b}{i}(-1)^{b-i}x^{b-i}$
Multiplying by $x^{a-1}$ we have,
$\displaystyle x^{a-1}(1-x)^{b}=\sum_{i=0}^{i=b}\dbinom{b}{i}(-1)^{b-i}x^{(a+b-1)-i}$
integrating both sides from $0$ to $1$ we have,
$\displaystyle \int_{0}^{1}x^{a-1}(1-x)^{b}\,dx=\sum_{i=0}^b (-1)^{(b-i)} \dfrac{1}{a+b-i}\dbinom{b}{i}$
We know that ,
$\displaystyle\int_{0}^{1}x^{a-1}(1-x)^{b}\,dx=\frac{\Gamma (a)\Gamma (b+1)}{\Gamma (a+b+1)}=\frac{(a-1)!b!}{(a+b)!}$
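A quick exact-arithmetic check of the identity for small parameters (a sketch I added for verification; it takes $a\ge 1$ so that the integral converges):

    from fractions import Fraction
    from math import comb, factorial

    def lhs(a, b):
        return sum(Fraction((-1) ** (b - i) * comb(b, i), a + b - i) for i in range(b + 1))

    def rhs(a, b):
        return Fraction(factorial(a - 1) * factorial(b), factorial(a + b))

    assert all(lhs(a, b) == rhs(a, b) for a in range(1, 6) for b in range(6))
    print("identity verified for small a, b")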
H: Measurability from Inner Measure
Let $E$ have finite outer measure. Show that there is a $G_\delta$ set $G$ which contains $E$ and has the same outer measure. Then show $E$ is measurable if and only if $E$ contains an $F_\sigma$ set $F$ of the same outer measure.
$\textit{Proof:}$ Choose, for each $k$, an open set $\mathcal{O}_k \supseteq E$ such that $m^*(\mathcal{O}_k \setminus E) < \frac{1}{k}$. Define $G = \cap_{k=1}^{\infty} \mathcal{O}_k$. Then $G$ is a $G_\delta$ set that contains $E$, and by monotonicity $m^* (G \setminus E) \leq m^*(\mathcal{O}_k \setminus E) < \frac{1}{k}$, so $m^* (G \setminus E) = 0$ and we've shown the first part.
Now a set $E$ is measurable provided for any set $A$ we have that
$$ m^*(A) = m^* (A \cap E) + m^*(A \cap E^C)$$
What is the next step?
Edit: The textbook says the fact that (iii) For each $\epsilon > 0$, there is a closed set $F$ contained in $E$ for which $m^* (E \setminus F) < \epsilon$. Can someone explain which parts of Royden's proof need to be altered to show this? He says it follows from DeMorgan's identities.
AI: If you are trying to show that the complement of a measurable set is measurable, here's what I would do. You know that for every $k \in \Bbb{N}$ there is an open set $\mathcal{O}_k \supseteq E$ such that $m^\ast (\mathcal{O}_k \setminus E) \leq \frac{1}{k}$. Now consider the union
$$\mathcal{O} = \bigcup_{k=1}^\infty \mathcal{O}_k^c.$$
Each $\mathcal{O}_k^c$ is closed and thus measurable, so that $\mathcal{O}$ is a measurable subset contained in $E^c$. Now write
$$E^c = \mathcal{O} \cup( E^c \setminus \mathcal{O}).$$
Since $\mathcal{O}$ is measurable, can you show that $E^c \setminus \mathcal{O}$ is measurable? Hint: show that it is contained in a set of measure zero and thus measurable. |
H: how to determine the following set is countable or not?
How to determine whether or not these two sets are countable?
The set A of all functions $f$ from $\mathbb{Z}_{+}$ to $\mathbb{Z}_{+}$.
The set B of all functions $f$ from $\mathbb{Z}_{+}$ to $\mathbb{Z}_{+}$ that are eventually 1.
The first one is easier to determine, since the set of functions from $\mathbb{N}$ to $\{0,1\}$ is uncountable. But how to determine the second one? Thanks in advance!
AI: Let $B_n$ be the set of functions $f\colon \mathbb Z_+\to\mathbb Z_+$ with $f(x)\le n$ for all $x$ and $f(x)=1$ for $x>n$.
Then $$B=\bigcup_{n\in\mathbb N}B_n$$
and $|B_n|=n^n$. |
H: Find the values of the constants in the following identity $x^4+4/x^4 = (x^2-A/x^2)^2+B$
A step by step solution would be preferred for the following question:
Find the values of the constants in the following identity $x^4+4/x^4 = (x^2-A/x^2)^2+B$.
so far I managed to substitute $x$ for $1$ to get :
$x^4+4/x^4=x^4-2A+A/x^2+B$
However I'm not sure how to proceed further.
AI: Let's find the coefficients without using the Heaviside cover-up method (that is, without substituting values for $x$). First, we convert to polynomials by squaring and clearing fractions:
$$ \begin{align*}
x^4+\dfrac{4}{x^4} &= \left(x^2-\dfrac{A}{x^2}\right)^2+B \\
x^4+\dfrac{4}{x^4} &= \left(x^4-2A+\dfrac{A^2}{x^4}\right)+B \\
\dfrac{4}{x^4} &= -2A+B+\dfrac{A^2}{x^4} \\
4 &= (-2A+B)x^4+A^2 \\
\end{align*} $$
Next, we compare coefficients. That is, we equate the coefficients of each power of $x$ for the polynomials for each side. In this case, we need only compare the coefficients of $x^4$ and $x^0$ (the constants):
$$ \begin{align*}
x^0&: \quad 4=A^2 \implies A=\pm2\\
x^4&: \quad 0=-2A+B \implies B=2A=2(\pm2)=\pm4\\
\end{align*} $$
So there are two possibilities. Either $\boxed{A=2,B=4}$ or $\boxed{A=-2,B=-4}$. |
H: Why is $k \rightarrow A \rightarrow A / I$ and isomorphism of rings if $I \subset A$ is maximal?
Let $k$ be an algebraically closed field, $A$ a finitely generated $k$-algebra and $I \subset A$ a maximal ideal. Let $\varphi: k \rightarrow A$ be a ring homomorphism.
Why is this combination
$$k \rightarrow A \rightarrow A / I$$
an isomorphism?
AI: The general case is not going to be true and here is why: Take $k = \Bbb{C}$, $A = \big(\Bbb{C}(x)\big)[y]$ and $I = (y)$ that is maximal since $\big(\Bbb{C}(x)\big)[y]/(y) \cong \Bbb{C}(x)$ is a field. But now $ \Bbb{C}(x)$ is much bigger than $\Bbb{C}$ (in fact infinite dimensional over $\Bbb{C}$) and so your compositum cannot possibly be an isomorphism.
Your result though is true in the case when $A$ is a finitely generated $k$ - algebra. If $I$ is a maximal ideal then $A/I$ is a finitely generated $k$ - algebra that is a field, and hence by Zariski's lemma is a finite algebraic extension of $k$. Since $k$ is algebraically closed, $A/I = k$ and so your compositum can be viewed as an injective linear map between two $1$ - dimensional vector spaces and thus is an isomorphism. |
H: How to find the integral of $\frac{1}{2}\int^\pi_0\sin^6\alpha \,d\alpha$
$$\frac{1}{2}\int^\pi_0\sin^6\alpha \,d\alpha$$
What is the method to find an integral like this?
AI: Perhaps the easiest, though it requires some knowledge of the complex exponential function, is to substitute $$\sin x=\frac{e^{ix}-e^{-ix}}{2i}$$ in the integral and expand. |
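To spell the hint out (my own continuation, not part of the original answer): expanding the sixth power and collecting conjugate terms gives
$$\sin^6\alpha=\left(\frac{e^{i\alpha}-e^{-i\alpha}}{2i}\right)^6=\frac{10-15\cos 2\alpha+6\cos 4\alpha-\cos 6\alpha}{32},$$
and since every cosine term integrates to $0$ over $[0,\pi]$, only the constant survives:
$$\frac12\int_0^\pi\sin^6\alpha\,d\alpha=\frac12\cdot\frac{10\pi}{32}=\frac{5\pi}{32}.$$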
H: Use the residue theorem to evaluate
$$
\int _ {|z|=2} \frac { dz} {(z-4)(z^3-1)}
$$
What I've done now is the following.
$f$ has isolated singularities at $z=4$, $1$, $\exp(\pi i/3)$, $\exp(-\pi i / 3)$
$$
\int _ {|z|=2} \frac { dz} {(z-4)(z^3-1)} = 2 \pi i (\operatorname{Res}(f;1) + \operatorname{Res}(f;\exp(\pi i / 3) + \operatorname{Res}(f;\exp(-\pi i / 3) )
$$
Does it make sense?
And I have lost my way..
In my calculation, $\operatorname{Res}(f;1) = \frac 1 {-9}$. But I can't calculate the remaining two values.
AI: The poles are at $z=e^{\pm i 2 \pi/3}$; compute the residues from there.
$$\text{Res}_{z=e^{i 2\pi/3}} \frac{1}{(z-4)(z^3-1)} = \frac{1}{(e^{i 2 \pi/3}-4) 3 e^{i 4 \pi/3}}$$
$$\text{Res}_{z=e^{-i 2\pi/3}} \frac{1}{(z-4)(z^3-1)} = \frac{1}{(e^{-i 2 \pi/3}-4) 3 e^{-i 4 \pi/3}}$$ |
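If you want to double-check this symbolically, here is a short SymPy sketch (my addition):

    from sympy import symbols, residue, exp, I, pi, simplify

    z = symbols('z')
    f = 1 / ((z - 4) * (z**3 - 1))
    poles = [1, exp(2*I*pi/3), exp(-2*I*pi/3)]        # the three poles inside |z| = 2
    integral = 2*pi*I*sum(residue(f, z, p) for p in poles)
    print(simplify(integral))                          # -2*I*pi/63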
H: proving $\frac{1}{1\cdot 2}+\frac{1}{3\cdot 4}+\frac{1}{5\cdot 6}+\cdots+$ Without Induction
I proved that:
$$
\begin{align}
& {} \quad \frac{1}{1\cdot 2}+\frac{1}{3\cdot 4}+\frac{1}{5\cdot 6}+\cdots+\frac{1}{(2n-1)\cdot 2n} \\[10pt]
& =\frac{1}{n+1}+\frac{1}{n+2}+\frac{1}{n+3}+\cdots+\frac{1}{2n}\text{ for }n\in \mathbb{N}
\end{align}
$$
by induction.
I wonder if it can be done without using induction. If so, I'll appreciate it if someone could show how.
Thanks.
AI: $$\text{As }\frac1{(2r-1)2r}=\frac{2r-(2r-1)}{(2r-1)2r}=\frac1{2r-1}-\frac1{2r},$$
$$\text{we can write }\frac{1}{1\cdot 2}+\frac{1}{3\cdot 4}+\frac{1}{5\cdot 6}+...+\frac{1}{(2n-1)\cdot 2n}$$
$$=\left(\frac11-\frac12\right)+\left(\frac13-\frac14\right)+\cdots+\left(\frac1{2n-1}-\frac1{2n}\right)$$
$$=\left(\frac11+\frac12-2\cdot\frac12\right)+\left(\frac13+\frac14-2\cdot\frac14\right)+\cdots+\left(\frac1{2n-1}+\frac1{2n}-2\cdot\frac1{2n}\right)$$
$$=\frac11+\frac12+\frac13+\frac14+\cdots+\frac1{n-1}+\frac1n+\frac1{n+1}+\cdots+\frac1{2n-1}+\frac1{2n}-2\left(\frac12+\frac14+\frac16+\cdots+\frac1{2n}\right)$$
$$=\frac11+\frac12+\frac13+\frac14+\cdots+\frac1{n-1}+\frac1n+\frac1{n+1}+\cdots+\frac1{2n-1}+\frac1{2n}-\left(1+\frac12+\frac13+\cdots+\frac1n\right)$$
$$=\frac1{n+1}+\frac1{n+2}+\cdots+\frac1{2n-1}+\frac1{2n}$$ |
H: $\mathbb F_9 = \mathbb F_3(i)$ Question about squares
Is it true that $\forall a,b \in \mathbb F_9$: $a \cdot b$ is a square $\iff$ $a \cdot \overline b$ is a square ?
AI: Since your group of units is cyclic, you can argue very similarly to the case of $F_p$, and the "Legendre symbol" analogue here is still a homomorphism.
In particular, multiplying by $a$ doesn't matter. You can reduce to $$b \; \mathrm{square} \Leftrightarrow \overline{b} \; \mathrm{square}.$$ This is true, because for nonzero $b$ $$b \; \mathrm{square} \Leftrightarrow b^4 = 1 \Leftrightarrow (\overline{b})^4 = \overline{b^4} = 1 \Leftrightarrow \overline{b} \; \mathrm{square}.$$ |
H: Is there any "superlogarithm" or something to solve $x^x$?
Is there any "superlogarithm" or something to solve an equation like this:
$$x^x = 10?$$
AI: Yes - it's called the Lambert W Function. Scroll down and take a look at Example 2. |
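In case the explicit recipe is useful (my own elaboration of the hint): taking logarithms, $x^x=10$ becomes $x\ln x=\ln 10$; with $u=\ln x$ this reads $ue^{u}=\ln 10$, so $u=W(\ln 10)$ and
$$x=e^{W(\ln 10)}=\frac{\ln 10}{W(\ln 10)}\approx 2.506.$$
A quick numerical check with SciPy:

    import numpy as np
    from scipy.special import lambertw

    x = np.exp(lambertw(np.log(10)).real)
    print(x, x**x)   # x is about 2.5062, and x**x is about 10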
H: Contradiction to Continuity of Integration?
For each of the two functions $f$ on $[1,\infty)$ defined below, show that $\lim_{n \rightarrow \infty} \int_1^n f$ exists while $f$ is not integrable over $[1,\infty)$. Does this contradict the continuity theory of integration?
(i) Define $f(x) = (-1)^{-n} / n$, for $n \leq x < n +1 $.
(ii) Define $f(x) = (\sin x )/x$ for $1 \leq x < \infty $.
This is Royden 4e pg 91. I think the idea is that with the Lebesgue integral we are calculating the positive and negative parts separately and then combining, and in this case their improper Riemann integral would cancel, but in this theory it does not work. How do I go about showing this? Do I investigate $\mathbb{R}\setminus(-\infty, 1)$?
AI: These functions are not Lebesgue-integrable since their absolute value isn't (definition of lebesgue-integrability for real-valued functions).
For the first function this is easy ($\int |f(x)|dx = \sum_{n} \frac{1}{n} = \infty$).
For the second function we have $|f(n\pi + x)| > \frac{1}{\pi}\frac{\sin(x)}{n+1}$ for $n>1$ and $x \in [0,\pi[$ so we have (roughly) $\int |f(x)|dx > \frac{1}{\pi} \times \int_{0}^{\pi} \sin(x)dx \times \sum_{n} \frac{1}{n} = \infty$.
However, you can integrate over $[1,n]$ and take the limit: for the first you get $\sum_{n} \frac{(-1)^{n}}{n} = -\ln(2)$, and for the second you can integrate by parts.
The continuity of integration is not contradicted, since it assumes the function to be integrable, which is not the case here.
H: Solving Recurrent Relation
$a_{1}=\dfrac{3}{5}$ , $~$ $a_{n+1}=\sqrt{\dfrac{2a_{n}}{1+a_{n}}}$ $~$ $(n\geq 1)$
Find the closed form of $a_{n}$
AI: Let $b_n = 1/a_n$. Then
$$b_{n+1}^2 = \frac{1+b_n}{2}$$
Sorry if this is kind of a deus ex machina, but you might be able to recognize that the above recurrence fits the pattern of the cosine half-angle formula. That said, the initial condition won't allow for a cosine solution... but it will allow for a hyperbolic cosine solution, which satisfies the same half-angle identity. Then you can write that
$$b_n = \cosh{\left ( \frac{\theta}{2^n}\right)}$$
and
$$a_n = \text{sech}{\left ( \frac{\theta}{2^n}\right)}$$
$$a_1 = \frac{3}{5} \implies \theta = 2 \,\text{arcsech}{\left ( \frac{3}{5}\right)} = 2 \log{3}$$
Therefore
$$a_n = \text{sech}{\left ( \frac{\log{3}}{2^{n-1}}\right)} = \frac{2}{\displaystyle 3^{1/2^{n-1}} + 3^{-1/2^{n-1}}}$$ |
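A short numerical check of this closed form against the original recurrence (a sketch I added):

    from math import sqrt

    a = 3 / 5
    for n in range(1, 8):
        closed = 2 / (3 ** (1 / 2 ** (n - 1)) + 3 ** (-1 / 2 ** (n - 1)))
        print(n, a, closed)        # the two columns agree
        a = sqrt(2 * a / (1 + a))  # advance the recurrence a_{n+1} = sqrt(2 a_n / (1 + a_n))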
H: Why is this J (all the linear combinations of two polynomials in F[x]) an ideal of the polynomial ring F[x]?
Suppose $a(x)$ and $b(x)$ are two non-zero polynomials in the polynomial ring $F[x]$ whose gcd $d(x)$ can be expressed as a "linear combination":
$$
d(x) = r(x)a(x) + s(x)b(x)
$$
where $r(x)$ and $s(x)$ are in $F[x]$
Now if $J$ is the set of all the linear combinations:
$$
u(x)a(x) + v(x)b(x)
$$
as $u(x)$ and $v(x)$ range over $F[x]$ then $J$ is an ideal of $F[x]$.
I don't understand this last statement "$J$ is an ideal of $F[x]$"
According to the definition of ideal:
"A nonempty subset $B$ of a ring $A$ is called an ideal of $A$ if $B$ is closed with respect to addition and negatives and $B$ absorbs products in $A$."
Now for the above problem how does this definition of ideal reasserts the idea that $J$ is an ideal?
Can anyone kindly help me find the answer?
AI: Your definition of ideal is correct, so now we need to show that $J$ satisfies the definition. So we ask the following questions:
Is $J$ closed under addition? That is, if $r_1(x)a(x)+s_1(x)b(x)$ and $r_2(x)a(x)+s_2(x)b(x)$ are in $J$, is their sum also in $J$?
Is $J$ closed under multiplication by elements in $F[x]$? That is, if $p(x)\in F[x]$, is the product of $p(x)$ and $r(x)a(x)+s(x)b(x)$ in $J$?
The answer to both of these questions is 'yes,' and this is shown by remembering that the elements in $J$ are all possible linear combinations of the polynomials $a(x)$ and $b(x)$. See if you can put the pieces together.
Notice that we can take $p(x)=-1$ in the second question to show that $J$ is closed under negatives. |
H: Is there an easy way to get to a paper, given a citation?
This question isn't about math per se, but I hope it will be of general interest to people studying math so I feel reasonably comfortable asking here. Let me start with an example: Today I had the following citation from a paper:
W. Hurewicz, On Duality Theorems, abstract 47-7-329, Bull. AMS 47 (1941), 562-563.
I tried Googling this directly, but it only turned up papers citing that paper. I tried typing the title into JSTOR, but got a bunch of nonsense. Finally, I had to Google the homepage of Bulletins of the AMS, then click on "past issues," scroll down and find 1941, click on that, go back and figure out which issue it was, click on that, go to another page, scroll down and find the relevant article, and click on that.
I'm fully aware that this is of course less work than walking to the library uphill both ways in the snow, but when I know exactly what I want there should be some way of getting to it without clicking more than once, at least in theory.
Does anyone have a good workflow for grabbing a paper quickly given the relevant bibliographical information? I'm willing to install software if that's what it takes. What I'm hoping for is a box where I copy/paste the above citation and the paper pops right up.
AI: If you are affiliated with a university with the right subscriptions, you just go to
http://www.ams.org/mathscinet/
and search for your article. Last name of the author and a word or two from the title is usually enough. After you click on the article, there will be a small button far to the right associated to your library which will link you to online versions which you can download immediately, if they exist.
Your university's library's homepage should give you the details on how to log in on MathSciNet. Added: For me personally, my library requires me to go to http://www.ams.org.focus.lib.**MY-UNIVERSITY**.org/mathscinet/, log in with my university account, and it then redirects me to MathSciNet. I've simply bookmarked that address, and make sure I stay logged in, so using this bookmark redirects me directly to MathSciNet. Then I just need to search, click twice, and then I have the article. Your experience may differ.
(Unfortunately, MathSciNet did not have the specific article you referred to. It might be because it is too old. In my experience, all articles I've ever wanted to get which were published after 1950 have been on there.) |
H: Find values of the constants in the following identity: x^4+Ax^3 + 5x^2 + x + 3 = (x^2+4)(x^2-x+B)+Cx+ D
Another question on identities:
$$x^4+Ax^3 + 5x^2 + x + 3 = (x^2+4)(x^2-x+B)+Cx+ D$$
How can I find the coefficients for this?
I've got as far as multiplying out the brackets to get:
$$x^4+Ax^3 + 5x^2 + x + 3 = (x^4-x^3+Bx^2+4x^2-4x+4B)+Cx+ D$$
It would be useful to get a hint at least on where to go next.
AI: You're almost to the final solution. Just combine all coefficients of the various powers of $x$ and compare each of them individually:
$$
x^4+Ax^3 + 5x^2 + x + 3 = x^4-x^3+ (B+4)x^2+(C-4)x+4B+ D
$$
You'll get
$$
\begin{align}
A &= -1 \\
B &= +5 - 4 \\
&= 1 \\
C &= +1 +4 \\
&= 5 \\
D &= 3 - 4B \\
&= -1
\end{align}
$$
Edit
To explain a little as OP mentioned:
The process is as simple as taking everything onto one side of $=$:
$$
(x^4 - x^4) + (A x^3 + x^3) + (5 x^2 - (B + 4) x^2 ) + (x - (C - 4)x ) + \left(3 - (4B + D)\right) = 0 \\
\implies (A + 1) x^3 + (1 - B) x^2 + (4 - C) x + (3 - 4B - D) = 0 \\
\therefore \pmatrix{ A + 1 \\ 1 - B \\ 4 - C \\ 3 - 4B - D } = \pmatrix{ 0 \\ 0 \\ 0 \\ 0 }
$$
which gives you the result. |
H: Does $((x-1)! \bmod x) - (x-1) \equiv 0\implies \text{isPrime}(x)$
Does $$((x-1)! \bmod x) - (x-1) = 0$$
imply that $x$ is prime?
AI: Yes. This is known as Wilson's theorem.
It's not very practical as a primality test, because the amount of calculation it requires is more than even the obvious tests. |
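For completeness, a direct (and deliberately naive) implementation of the test in the question (my sketch; fine for small $x$, hopeless for large ones):

    def is_prime_wilson(x):
        # Wilson's theorem: for x > 1, x is prime iff (x-1)! is congruent to -1 (mod x),
        # i.e. ((x-1)! mod x) - (x-1) == 0.
        if x < 2:
            return False
        fact = 1
        for i in range(2, x):
            fact = fact * i % x
        return fact == x - 1

    print([n for n in range(2, 30) if is_prime_wilson(n)])
    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]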
H: $a\in \mathbb{C}$ for which $[ \mathbb{Q}(a) : \mathbb{Q}(a^3) ] = 2$.
I want to find a $a\in \mathbb{C}$ for which $[ \mathbb{Q}(a) : \mathbb{Q}(a^3) ] = 2$.
Any ideas?
Thanks.
AI: Hint: Let $a$ be a primitive cube root of unity. |
H: Analytic extension for a a function defined in $\mathbb{N}$
I would like to know if it is possible to extend analytically any function of the type $f:\mathbb{N} \to \mathbb{C}$ to the whole complex plane. If it isn't possible, what should I assume to do so?
Just an example: the function number of divisors of $n$.
EDIT: Is it unique?
AI: The answer is yes.
First, put $$p_n(z)=\frac{(z-1)(z-2)\cdots(z-n+1)}{(n-1)!}$$ and note that $p_n(k)=0$ for $k=1,2,\ldots,n-1$ while $p_n(n)=1$. Now let
$$ f(z)=\sum_{n=1}^\infty b_ne^{a_n(z-n)}p_n(z),\qquad b_n=f(n)-\sum_{k=1}^{n-1}b_ke^{a_k(n-k)}p_k(n). $$
Just make sure that $a_n>0$ is big enough so that, say,
$$|b_ne^{a_n(z-n)}p_n(z)|<2^{-n}$$ for all $z$ with $|z|\le n-1$.
Then the series converges uniformly on every bounded set. |