H: A basis for a subspace of $\mathbb R^3$
I have the following question:
Find the basis of the following subspace in $\mathbb R^3$:
$$2x+4y-3z=0$$
This is what I was given. What I have tried is to place it into the matrix $[2,4,-3,0]$, but this became more confusing after reducing it to $[1,2,-3/2,0]$ (this was done to get a leading $1$). Then I solved for $x,y,z$.
$$1x + 2y - \frac3 2z = 0$$ from the matrix. Then
$x = \frac3 2z - 2y$
so
$$\left(\frac3 2z-2y, 2y, \frac3 2z\right)$$
Now I have $$(x,y,z) = z(3/2, 0, 3/2) + y(-2, 2, 0) \implies
\{(3/2,0,3/2),(-2,2,0)\}$$
which has two dimensions.
But this solution that I am giving is wrong. The actual answer according to my textbook is $\{(3,0,2),(-2,1,0)\}$. Can anyone please tell me where I messed up? Thanks.
AI: You can get a basis quickly, as follows. You have $$2x+4y-3z=0$$.
This means, $$z=\frac{2x+4y}3$$
So if $\vec x$ solves the above, $$\vec x=\left(x,y,\frac 2 3 x+\frac 4 3 y\right)$$
That is, $$\vec x =x\left(1,0,\frac 23\right)+y\left(0,1,\frac 43\right)$$
So you can choose your basis to be $\{(3,0,2),(0,3,4)\}$ upon scaling.
In general, if you're working in $\Bbb R^3$, you know $ax+by+cz=0$ will be a subspace of dimension two (a plane through the origin), so it suffices to find two linearly independent vectors that satisfy the equation. To that end, make a coordinate vanish, say $x=0$, and find what $y,z$ may be. For example, in your case we get $4y-3z=0$, so we can take $y=3,z=4$, to get the first vector to be $(0,3,4)$. Now make $y=0$. You get $2x-3z=0$, so this suggests taking $x=3,z=2$. This gives the basis $$\{(0,3,4),(3,0,2)\}$$ as above. Note that if we chose to make, say, $z=0$, we could put $x=-2,y=1$ to get $$\{(-2,1,0),(3,0,2)\}$$
This works nicely because we have one equation, but wouldn't work so swimmingly if we had something of the form $$\begin{cases} 2x+3y-2z=0\\-x+y+3z=0\end{cases}$$
In fact, setting any coordinate to $0$ in the above gives nothing. However, setting say $z=1$ gives $x=11/5$ and $y=-4/5$, which correctly gives that the solution set is $$\langle (11,-4,5)\rangle$$
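As a quick sanity check (a small Python sketch added for illustration, not part of the original answer), every basis vector found above satisfies the plane equation:

```python
# Verify that the proposed basis vectors lie on the plane 2x + 4y - 3z = 0.
for x, y, z in [(0, 3, 4), (3, 0, 2), (-2, 1, 0)]:
    assert 2*x + 4*y - 3*z == 0, (x, y, z)
print("all three vectors lie on the plane")
```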
H: $f(z)={2z+1\over 5z+3}$ maps
Define
$H^{+}=\{z:y>0\}$
$H^{-}=\{z:y<0\}$
$L^{+}=\{z:x>0\}$
$L^{-}=\{z:x<0\}$
$f(z)={2z+1\over 5z+3}$ maps
$1.$ $H^+\to H^+$ and $H^-\to H^-$
$2$. $H^+\to H^-$ and $H^-\to H^+$
$3.$ $H^+\to L^-$ and $H^-\to L^+$
$4.$ $H^+\to L^+$ and $H^-\to L^-$
If I take $z\in H^{+}$ then ${3\over z}\in H^{-}\Rightarrow {3\over z}+5={5z+3\over z}\in H^{-}$ Could any one tell me how to proceed then?
AI: Since $f$ maps $\mathbb R \to \mathbb R$, it is one of the first two options. Since $f(i)=\frac{2i+1}{5i+3}=\frac{(1+2i)(3-5i)}{|3+5i|^2}=\frac{13+i}{34}$ has positive imaginary part, it's 1.
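A one-line check with Python's complex arithmetic (my addition, for those who want to see the number):

```python
# Evaluate f(z) = (2z + 1)/(5z + 3) at z = i; the imaginary part is positive,
# so f maps the upper half-plane to itself (option 1).
z = 1j
print((2*z + 1) / (5*z + 3))  # (0.382... + 0.029...j)
```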
H: eHarmony combinatoric question, probability that I should get at least 1 compatible match.
OK.. (as I type this with a smirk on my face) - in all seriousness I am trying to figure out, given 29 degrees of compatibility and 40 million members, if I should be getting at least 1 match a day. There are of course a lot of variables, so I am trying to simplify things. I also see this as similar to the "birthday problem", in which one wants to find, out of $n$ people, the probability of not having any birthdays in common.
In this case however, we don't care if others get a match, only that I do (muahahaha!), so I am guessing this is, instead of being in line with $_{n}C_{k}$ this is $_{n}C_{1}$ Where I am the $1$!
This is a bit more advanced in that we are looking at the "percentage of compatibility". So I am going to start by taking the 29 degrees of freedom and looking at things in a binary way, i.e. either a match is compatible on a degree of freedom (same answer to a question) or they are not. Now by my calculations there are 29 questions, so as with light switches, there are $2^{29} = 536870912$ ways to answer the questions. So if half the eH members are women, then there is a $20{,}000{,}000/536{,}870{,}912 = 3.7\%$ chance that I would answer exactly as a potential match.
It seems there should or could be much more to it in that say I went about the computation from the point of view that the probability of not answering the questions the same. So I think I would have something such as $$(1-2^{28}/2^{29})(1-2^{27}/2^{29})(1-2^{26}/2^{29})\space...\space(1-2^{0}/2^{29})$$
which seems to reduce to: $$((2^{28})(3 \cdot 2^{27})(7 \cdot 2^{26}) \space ... \space(2^{2}(2^{27}-1))(2^{1}(2^{28}-1)) \cdot (2^{29}-1))/(2^{29})^{29}$$ after multiplying the numerator by $(2^{29})^{29}$.
I am not sure what this reduces to - hopefully more matches than I am currently getting..
I wonder though if I am on the right track? I however wonder about the 20,000,000. If I were to take the possible "state spaces" or the $2^{29}$ possible choices for the 29 degrees of freedom (treated as binary yes/no), as it were the possible 365 days in a year birthday state space, then I come up with:
$$\frac{2^{29}!}{(2^{29})^{20,000,000}\cdot (2^{29} - 20,000,000)!}$$
which just seems insane.
Any thoughts (or dating advice ha!)
Thanks,
Brian
AI: The probability of at least 1 match is $$1- \mathsf{\text{probability of no matches}} = 1-(1-p)^n$$ where $p = \frac{1}{2^{29}}$ and $n=20{,}000{,}000$, assuming the choices are equally likely and the candidates choose independently. This is approximately 3.66%.
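In code (a small Python sketch added for illustration; `log1p`/`expm1` keep the computation numerically stable when $p$ is tiny):

```python
import math

# Probability of at least one match: 1 - (1 - p)^n,
# with p = 1/2^29 and n = 20,000,000.
p = 1 / 2**29
n = 20_000_000
prob = -math.expm1(n * math.log1p(-p))
print(f"{prob:.4%}")  # approximately 3.66%
```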
H: Decompose the group $U_{60}$ as direct product of cyclic groups
Decompose the following group as a direct product of cyclic groups: $U_{60}$.
Here is what I've done so far:
$60 = 2\cdot2\cdot5\cdot3$.
Therefore the answers are
$U_{60} \cong C_2\times C_2\times C_3\times C_5$ or $U_{60}\cong C_4\times C_3\times C_5$.
Sorry about the format.
AI: Notice first that $U_{60}$ is not the same as $C_{60}$. $U_{60}$ is the group consisting of those numbers $1\le k<60$ that satisfy $\gcd(k,60)=1$, and the group operation is multiplication modulo $60$. So the size of $U_{60}$ is considerably smaller than $60$. Here is what you need to do:
Find out how many elements $U_{60}$ does have (hint: count them! There are not too many, so you can (and should!) just find all of them). Then you'll have all the elements of the group in front of you. (General remark: there is a formula for the number of elements in $U_n$; look up Euler's totient function.)
Now, notice that $U_{60}$ is abelian (just like any other $U_n$). Now you know how many elements it has, so decompose that number into prime factors and proceed to use the fundamental theorem of finite abelian groups.
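As a quick check of the first step (a Python sketch, my addition), you can enumerate $U_{60}$ directly:

```python
from math import gcd

# U_60 = {1 <= k < 60 : gcd(k, 60) = 1} under multiplication mod 60.
U60 = [k for k in range(1, 60) if gcd(k, 60) == 1]
print(len(U60))  # 16, matching Euler's totient phi(60)
print(U60)
```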
H: Prove that the order of the elements of a restricted direct product is finite
if $G_i = \Bbb Z/p_i \Bbb Z$
($\Bbb Z$ means integers)
where $p_i$ is the $i$th prime and $I$ is the set of positive integers,
show that every element of the restricted direct product of the $G_i$'s has finite order.
My attempt to solve it:
I supposed that $S\subset I$ is the index set of the restricted direct product.
Let $S=\{i,i+1,i+2,\dots,m-1,m\}$,
$ G = G_i \times G_{i+1} \times G_{i+2} \times \dots \times G_m \times G_1 \times G_2 \times \dots \times G_{i-1} \times G_{m+1} \times \dots $
Let $g=(1_i ,1_{i+1} , 1_{i+2} , \dots ,1_m , g_1 ,g_2 ,\dots,g_{i-1} , g_{m+1} , \dots ) \in G$,
where $1_i$ is the identity of the $i$th group and $g_1 \in G_1$.
Now the problem is that to compute the order of this element $g$, I have to calculate the l.c.m. of the orders of the elements $g_1 , g_2 , \dots ,g_{i-1} , g_{m+1} , \dots$,
and those orders are infinitely many primes, so the l.c.m. can't be finite!
So the order must be infinite.
But I'm asked to prove it's finite!
So I think I made an error in my definitions ("maybe I understand the definition of restricted wrongly, I'm not sure").
Any help please, thanks.
AI: A "tuple" in the restricted direct product only differs from the identity on finitely many coordinates. So, you will be able to find the LCM of orders of coordinates to find the order of an element, as you began to do.
I think a good way to understand the restricted direct product is as a subgroup of the full direct product. The direct product, as you know, is just all "vectors" of elements where the $i$'th entry is from $G_i$, and you make this into a group with pointwise operations.
The restricted direct product is a subgroup of that consisting of elements which are the identity except possibly on finitely many coordinates. You can easily see that the two products share the identity, and that the elements I described are closed under multiplication and inversion.
In fact, I think you will have a good time showing that the restricted direct product is a normal subgroup of the direct product :)
H: Understanding mathematics imprecisely
For a long time, it has been a complete mystery to me how any of my peers understood any math at all with anything short of filling in every detail, being careful about every set theoretic detail down to the axioms. That's a slight exaggeration, but I certainly did much worse in courses where I attempted to replicate this myself by not reading every proof to save time.
It's only recently, and only within the field of probability theory, that I've developed the ability to do this myself. I am currently following Grimmett's book on Percolation theory. There are far too few details for someone of my level to fill it in completely, but I am getting more than nothing out of it.
Question 1: I would like to learn how to get even more out of such "incomplete" studying.
What tends to happen even now, and more before, is that as soon as I don't understand something, I lose focus and everything just flies over my head. I imagine this is partly psychological, since from a logical perspective if I have to accept proposition $P$ to derive $Q$, I could just think of myself as having proven merely that $P$ implies $Q$, and then no "acceptances" are being made.
Most of the time, professors simply stare blankly at me, wondering how I could persist like this, and all they say is to stop. But it's not that simple, because it appears my intuition is also primarily symbolic. Sure, I think of some geometric pictures when they're called for, but most of my problem-solving creativity comes from pattern matching methods and tricks with situations.
Question 2: How does one distill out the important ideas of a mathematical reading, such as a proof or paper?
Grimmett's book is very helpful in this regard. He will always tell me what's important, and as long as I'm willing to believe him, then I don't have to do anything. But what if I need things that are different than he emphasizes? I always worry that by not understanding everything, I will eventually reach some point in my life when I need to use some fact/method I glossed over and forgot, and that it could be framed in such a way that I would not even be aware of what is missing. That way I wouldn't even be able to do a huge review to rescue the fact from the depths of my ignorance. My current way of thinking about this subproblem of question 2 is that mathematicians always take this risk by not studying everything. So it's a risk-minimization game with time as the constraint. If so, how do I make smart choices with regard to this game?
Question 3: With my recent ability to learn imprecisely in probability, I've started to see many connections, even with outside fields. Many of them are probably fictitious. Many of the questions that I think are highly-motivated might actually be not really worth answering. How does one decide what questions are interesting? As a graduate student who has barely popped out above what the traditional classroom has to offer, I am very lost in this regard.
Question 4: The revelations that enabled me to understand math imprecisely came all at once. A similar comment about the abruptness of my coming upon the ability to proof-check without significant error could be made 2 years ago. Most of my peers seem to learn rather continuously, but the evolutions of my way of thinking seem to come all at once. Is there anything bad or good about this? If so, how do I minimize the bad and maximize the good?
As always, answers to subsets are appreciated.
AI: Full understanding is illusory. If you pursue it, you will find yourself trying to say what a number is, or a set, and digressing into the problem of making language, which for math is a meta-language, precise. And, of course, that can't be done.
So regarding your first question, it might help to observe how futile that innate wish of yours is, and how much you understand without full understanding (or compunction) in all other aspects of your life.
Imagine trying to learn biology and studying the chemical processes in the body, then asking "what is a chemical". You are given an answer that has to do with molecules, a term which you then inspect for precision's sake. Atoms come up, then electrons. Eventually you are learning quantum physics when all you wanted to do was understand how allergens work, or some such thing.
You must operate at the appropriate level for a specific problem. It's no use to reinvent the wheel and do everything from first principles. That would be like writing every program in machine code.
One day, our brains may be augmented with enhancements that allow us to have enough knowledge to understand everything down to our "axioms". Until then, it is a matter of becoming comfortable with our limitations and trying to work with what we have to be awesome.
In terms of knowing which questions are interesting, I think that is one of the harder parts of research. One almost has to be prescient.
And as for getting the important ideas of a proof, my first answer is that sometimes you can't really. Some proofs are just a confluence of numerical estimations and limit results and don't give any real insight into what is going on. Since you seem to be a probabilist, I would point to the proof that a random walk in dimension $n$ is recurrent for $n=1,2$ and transient otherwise. One feels there should be an intuitively understandable reason, but all one gets is Stirling's formula.
For other proofs it is a matter of becoming comfortable enough with the terminology and techniques used in the proof (by re-reading) to see the forest for the trees. In Kung Fu one talks about "learning to forget". You learn the movements carefully so you can perform them without thinking about them when the time comes. You do the same when you learn to integrate or differentiate - you don't want to be doing this from the limit definition when crunch time comes (an exam, say).
H: Converting parametric equations to xy equation
I have these parametric equations:
$$
x=1+t \\
y=1-t
$$
How do I convert this to an xy equation?
David
AI: Use the first equation to find an expression for $t$ in terms of $x$.
Substitute into the second equation to get an equation only involving $x$ and $y$.
Rearrange if desired.
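Applied to the system in the question, these steps give
$$t = x - 1 \quad\Longrightarrow\quad y = 1 - (x - 1) = 2 - x,$$
i.e. the line $x + y = 2$.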
Update: This method will work for the example you gave, but might not work for a general example. For example, the following is a particularly common parametrization:
\begin{align}
x &= \cos t \\
y &= \sin t
\end{align}
Here, you can't get a unique expression for $t$ in terms of $x$: for example, if $x$ is $1$, we could have $t=0$ or $t=2\pi$. In this case, the solution is to square both equations to give:
\begin{align}
x^2&=\cos^2 t\\
y^2&=\sin^2 t
\end{align}
Then add them together to get
$$
x^2+y^2=\cos^2t+\sin^2t=1
$$
This equation ($x^2+y^2=1$) is the equation for a circle of radius $1$ about $(0,0)$.
Edit: (answering the question in comment below) Sometimes you're presented with a parametrization like
\begin{align}
x&=1+t\\
y&=1-t
\end{align}
and you want to convert that into a vector equation. A mathematician would immediately recognize that system of parametric equations as a vector equation: $x$ and $y$ are themselves coordinates of the vector $\begin{pmatrix}
x\\
y
\end{pmatrix}$. The equations then become:
$$
\vec{r}=\begin{pmatrix}x \\ y\end{pmatrix}=\begin{pmatrix}1+t\\1-t\end{pmatrix}
$$
and you can rearrange that into whatever (vector) form you like using the usual rules for addition of vectors.
H: Question on the proof of a subspace of Polish space is Polish, iff it's a $G_\delta$ set.
Suppose, $X$ is a Polish space, $Y$ is a Polish subspace of $X$. $\{U_n\}_{n \in \Bbb N}$ is a basis of open sets of $X$.
Let $A = \{ x \in \overline {Y} : \forall \epsilon \exists {n}(x \in U_n \land \operatorname{diam}{(Y \cap U_n)} < \epsilon) \}$
$\overline {Y}$ is the closure of $Y$. $\operatorname{diam}{(Y \cap U_n)}$ is the diameter of $Y \cap U_n$.
Why is $A$ different from $\overline {Y}$?
Added: The proof is from an online note (page 11) about descriptive set theory. Since it's posted free online by the author, I take the liberty to paste a screenshot of it for convenience.
AI: Let's look at an example. Let $X = \{1, 1/2, 1/3, \dots, 0\}$ with the Euclidean topology, and take $Y = X \setminus \{0\}$. Then the discrete metric $d(x,y) = 1-\delta_{xy}$ on $Y$ is complete and compatible with the subspace topology. Now $0 \in \bar{Y}$. However, if $U_n$ is any open neighborhood of $0$ (in the topology of $X$), then it contains infinitely many points of $Y$; in particular, at least two points, so $\operatorname{diam}(U_n \cap Y) = 1$ (where the diameter is computed with respect to $d$). Therefore $0 \notin A$, and we see explicitly that $\bar{Y} \ne A$.
H: For which $\alpha$ this sum converges? $\sum_{n=3}^{\infty} {\frac{1}{n \cdot \ln(n) \cdot \ln(\ln(n))^{\alpha}}}$
Given:
$$\sum_{n=3}^{\infty} {\frac{1}{n \cdot \ln(n) \cdot \ln(\ln(n))^{\alpha}}}$$
I am asked: For what values of $\alpha$ does this summation converge?
So I said, $f(n) = \frac{1}{n \cdot \ln(n) \cdot \ln(\ln(n))^{\alpha}}$. $f(n)$ is obviously a monotonically decreasing function. Then, by the integral test, this summation converges if and only if $I = \int_{3}^{\infty} {\frac{dn}{n \cdot \ln(n) \cdot \ln(\ln(n))^{\alpha}}}$ converges (i.e., has a finite value).
But I am finding this very hard to go on with. Any direction will be appreciated!
AI: By the Cauchy Condensation Test:
$$\sum_{n=3}^{\infty} {\frac{1}{n \cdot \ln(n) \cdot \ln(\ln(n))^{\alpha}}} \text{ converges} \iff $$
$$\sum_{n=3}^{\infty} {\frac{2^n}{2^n \cdot \ln(2^n) \cdot \ln(\ln(2^n))^{\alpha}}} =
\sum_{n=3}^{\infty} {\frac{1}{n \ln(2) \cdot \ln(n\ln(2))^{\alpha}}} \text{ converges} \iff $$
$$\sum_{n=3}^{\infty} {\frac{1}{(n\ln(2)+\ln(\ln(2)))^{\alpha}}} \text{ converges} \iff \alpha>1,$$ where the last step applies the condensation test once more and then compares with the $p$-series $\sum 1/n^{\alpha}$.
H: Criteria for a function to be plottable
Assume that $f$ is a real function. My question is how one can decide if $f$ is plottable or not. My assumption is that $f$ must be of class $C^1$, but I am not aware of such a result. My assumption is based on the fact that there are differentiable functions that cannot be plotted (e.g.,
$$
f:\mathbb{R}\rightarrow\mathbb{R},\quad f(x)=\left\{
\begin{array}
[c]{l}%
x^{2}\sin\dfrac{1}{x},~\text{if }x\neq0\\
0,~\text{if }x=0
\end{array}
\right.
$$
is not plottable in a neighborhood of $0$, since its derivative does not have a limit at $0$; hence the slope of the tangent to the graph varies in an "uncontrollable" fashion.
Edit: The problem is that I don't have an "official" definition for what "plottable" means; I only have an intuitive understanding of the concept. Also, it is clear that there are plottable functions which have points where they are not differentiable, e.g., $|x|$, so my requirement of being $C^1$ is not valid. Maybe I should have said "piecewise $C^1$".
AI: Without a precise definition of plottable, we cannot give a precise classification of the functions that are. Some things that can get in the way:
1) Too many discontinuities: The Dirichlet function, $1$ on the rationals and $0$ on the irrationals, is an example. Roughly speaking, I would say we can handle a finite number of discontinuities as long as they are not too close together.
2) Too much local variation: Even $\sin \frac 1x$ on $[\frac 1{100000},1]$ has so much variation that it will be a blur. This one is $C^{\infty}$.
3) Too much variation of scale: Think of taking a square wave with amplitude $1000$ and adding $\sin x$ to it. Of course we can round off the corners to get continuity. But to see what is going on, you need to see the square wave, and then the sine wave will be too small to see. You can have similar problems in the $x$ direction. Maybe you have stuff going on close to the origin, then around $x=100000$.
4) As a takeoff of 3, too much detail. There are only so many pixels on a piece of paper. If there is too much important stuff, some will get lost. Think of the Mandelbrot set (not a function but it gives the idea)
5) Too much global variation. What do you do with $e^{1000x}\sin x?$
I think the real answer is that almost no functions are plottable, but the ones we use are selected to usually be plottable. This is like the fact that almost every function is discontinuous, but most of the ones you see in class will be continuous.
H: Hall $\pi$-subgroup and $HN=G$
Let $\pi$ be any set of prime numbers. A finite group $H$ is a $\pi$-group if all primes that divide $|H|$ lie in $\pi$. If $|G|<\infty$, then a Hall $\pi$-subgroup of $G$ is a $\pi$-subgroup $H$ such that $|G:H|$ is divisible by no prime in $\pi$. Let $\phi$ be a homomorphism defined on $G$.
If $H\subseteq G$ is a Hall $\pi$-subgroup, show that $\phi(G)$ is a $\pi$-group iff $HN=G$, where $N=\text{ker}(\phi)$.
If $HN=G$, then $\phi(H)=\phi(G)$ since $\phi(N)=1$, and since $\phi(H)$ divides $H$, which is a $\pi$-subgroup, we must have that $\phi(G)$ is a $\pi$-group.
But what about the other direction? Assume that $\phi(G)$ is a $\pi$-group. This means all primes dividing $|\phi(G)|$ are in $\pi$, all primes dividing $|H|$ are in $\pi$, and $|G:H|$ is divisible by no prime in $\pi$. We have $|G| = |H||G:H|$, and $|\phi(G)|$ divides $|G|$. How can we prove that $HN=G$ from here?
AI: $N\subset HN$, hence $[G:HN]=[\phi(G):\phi(HN)]$. But $[G:HN]$ divides $[G:H]$, so it is "$\pi$-free", hence $[\phi(G):\phi(HN)]$ is "$\pi$-free". But $\phi(G)$ is a $\pi$-group...
H: Solve system of first order differential equations
I have to solve differential systems like this:
$$
\left\{
\begin{array}{c}
x' = 3x - y + z \\
y' = x + 5y - z \\
z' = x - y + 3z
\end{array}
\right.
$$
So far I have computed the eigenvalues $k \in \{2,4,5\}$ by solving the equation resulting from the determinant of this matrix:
$$
\begin{pmatrix}
3 & -1 & 1 \\
1 & 5 & -1 \\
1 & -1 & 3
\end{pmatrix} - kI_3 =
\begin{pmatrix}
3-k & -1 & 1 \\
1 & 5-k & -1 \\
1 & -1 & 3-k
\end{pmatrix}
$$
I don't know what to do next.
NOTE: This is an example but from it I want to learn the method to solve any system of this kind. I learn better from particular examples than directly from generalization.
AI: To find the eigenvalues of: $$A =\begin{bmatrix}3 & -1 & 1\\1 & 5 & -1\\1 & -1 &3\end{bmatrix},$$
we set up $|A - \lambda I| = 0$ and solve the characteristic polynomial, so we have:
$$|A -\lambda I| = \begin{vmatrix} 3-\lambda & -1 & 1\\1 & 5-\lambda & -1\\1 & -1 &3 -\lambda \end{vmatrix} = 0$$
From this, we get the characteristic polynomial as: $$-\lambda^3+11 \lambda^2-38 \lambda+40 = -(\lambda-5) (\lambda-4) (\lambda-2)= 0$$
This gives us three eigenvalues: $\lambda_1 = 2$, $\lambda_2 = 4$ and $\lambda_3 = 5$.
To find the eigenvectors, we set up and solve $[A - \lambda_i I]v_i = 0$, so let's do $\lambda_1 = 2$. We have:
$$[A - \lambda_i I]v_i = \begin{bmatrix} 3-2 & -1 & 1\\1 & 5-2 & -1\\1 & -1 &3 -2 \end{bmatrix}v_1 = \begin{bmatrix} 1 & -1 & 1\\1 & 3 & -1\\1 & -1 & 1 \end{bmatrix}v_1 = 0$$
This leads to the row-reduced-echelon form:
$$\begin{bmatrix} 1 & 0 & \dfrac{1}{2}\\0 & 1 & -\dfrac{1}{2}\\0 & 0 & 0 \end{bmatrix}v_1 = 0$$
This gives us a solution of:
$b = \dfrac{1}{2}c \rightarrow c = 2 \rightarrow b = 1$, and $a = -\dfrac{1}{2} c \rightarrow a = -1$.
Thus the eigenvector for the eigenvalue $\lambda_1 = 2$ is $v_1 = (-1, 1, 2)$.
If we repeat this process two more times for the other two eigenvalues, we end up with the eigenvalue/eigenvector pairs:
$$\lambda_1 = 2, ~v_1 = (-1, 1, 2)$$
$$\lambda_2 = 4, ~v_2 = (1, 0, 1)$$
$$\lambda_3 = 5, ~v_3 = (1, -1, 1)$$
Note that different approaches are needed when we get into repeated eigenvalues, but that is for another problem.
Try to derive those last two eigenvectors and report back if they do not work out.
To write out the solution for the three equations, we would use a linear combination of the eigenvalues, eigenvectors and some unknown constants, as:
$$X(t) = \begin{bmatrix} x(t) \\ y(t) \\ z(t) \end{bmatrix} = c_1e^{2t} \begin{pmatrix} -1 \\ 1 \\ 2 \end{pmatrix} + c_2 e^{4t} \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} + c_3 e^{5t} \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}$$
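As a sanity check on the eigenpairs (a numpy sketch, my addition):

```python
import numpy as np

# Verify that each pair satisfies A v = lambda v.
A = np.array([[3, -1, 1],
              [1,  5, -1],
              [1, -1,  3]])
for lam, v in [(2, [-1, 1, 2]), (4, [1, 0, 1]), (5, [1, -1, 1])]:
    assert np.allclose(A @ np.array(v), lam * np.array(v))
print("all three eigenpairs check out")
```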
H: Does $\int_{0}^{\infty} \frac{\sin (\tan x)}{x} \ dx $ converge?
Does $ \displaystyle \int_{0}^{\infty} \ \frac{\sin (\tan x)}{x} dx $ converge?
$ \displaystyle \int_{0}^{\infty} \frac{\sin (\tan x)}{x} \ dx = \int_{0}^{\frac{\pi}{2}} \frac{\sin (\tan x)}{x} \ dx + \sum_{n=1}^{\infty} \int_{\pi(n-\frac{1}{2})}^{\pi(n+\frac{1}{2})} \frac{\sin (\tan x)}{x} \ dx $
The first integral converges since $\displaystyle \frac{\sin (\tan x)}{x}$ has a removable singularity at $x=0$ and is bounded near $ \displaystyle \frac{\pi}{2}$.
And $ \displaystyle \int_{\pi(n-\frac{1}{2})}^{\pi(n+\frac{1}{2})} \frac{\sin (\tan x)}{x} \ dx$ converges since $\displaystyle \frac{\sin (\tan x)}{x}$ is bounded near $\pi(n-\frac{1}{2})$ and $\pi(n+\frac{1}{2})$.
But does $ \displaystyle \sum_{n=1}^{\infty} \int_{\pi(n-\frac{1}{2})}^{\pi(n+\frac{1}{2})} \frac{\sin (\tan x)}{x} \ dx$ converge?
AI: Yes.
Note that
$$\begin{align} I_n:=\int_{\pi(n-\frac12)}^{\pi(n+\frac12)}\frac{\sin(\tan(x))}{x}\,\mathrm dx&=\int_0^{\frac\pi2}\left(\frac1{n\pi+x}-\frac1{n\pi-x}\right)\sin(\tan(x))\,\mathrm dx\\&=\int_0^{\frac\pi2}\frac{-2x}{n^2\pi^2-x^2}\sin(\tan(x))\,\mathrm dx\\
\end{align}$$
With $A:=\int_0^{\frac\pi2}\max\{-2x\sin(\tan (x)),0\}\,\mathrm dx$, $B:=\int_0^{\frac\pi2}\min\{-2x\sin(\tan (x)),0\}\,\mathrm dx$ (which both converge), we can thus estimate
$$ \frac{1}{n^2\pi^2-\pi^2/4}B+\frac{1}{n^2\pi^2-0}A\le I_n\le \frac{1}{n^2\pi^2-\pi^2/4}A+\frac{1}{n^2\pi^2-0}B,$$
The difference between these bounds and $\frac1{n^2\pi^2}(A+B)$ is governed by
$$\frac{1}{n^2\pi^2-\frac{\pi^2}{4}}-\frac1{n^2\pi^2}=\frac1{4\pi^2 n^4-\pi^2 n^2},$$
hence
$$ I_n=\frac1{n^2\pi^2}(A+B)+O(n^{-4}).$$
We conclude that $\sum I_n$ converges at least as well as $\sum\frac1{n^2}$.
H: Question about long proofs?
How do people write 50-page-long proofs (and longer)? They have a target in mind, but I can't get my head around how they foresee that these 50 pages of work will actually lead them to exactly their target.
AI: There are two cases:
1. You start by noticing something, then you prove a small claim, and slowly you add more and more claims, until you end up with a big proof of something impressive.
2. You set out to prove a certain thing, then you say "Ah, if only X was true", and when you think about it you realize that X is true, and you prove that. After some finitely many iterations you end up with a complete proof spanning over 50 pages or so.
Sometimes you have to develop an entirely new technology in order to prove something. Then explaining it, and proving all its basics can end up in a small book.
Of course, often when writing complicated proofs one may feel the need to add some less trivial introductory parts, which take more pages.
All in all mathematics is not something that you do from one day to the next, but rather a huge project that you never finish in your lifetime. And the proofs just accumulate.
H: integrate function with change of variable
Find the primitive of $\;\displaystyle \int x^2 \sqrt{x+1}$ $dx$
So (...)
$u = x + 1 \quad \iff \quad u - 1 = x$
$u' = 1 \quad \iff \quad \frac{du}{dx} = 1 \rightarrow \;du = dx$
$$\int(u - 1)^2 . u^\frac12 \; du \;\;= \;\; {{(u-1)^3}\over3} \cdot {u^{3/2}\over{3/2}} + C \;\; =\;\; {1\over3} x^3\cdot{2\over3}(x+1)^{3/2} + C$$
Is this correct?
AI: The substitutions will work fine...but your evaluation is problematic.
Most problematic is the fact that you are integrating each factor of a product in the integrand, and expressing this as the product of integrated factors, which you cannot do (unless you are using, say, integration by parts, which proceeds much differently). I.e. $$\int [f(x)\cdot g(x)]\,dx \;\neq \;\int f(x) \,dx \cdot \int g(x)\,dx$$
So, let's start from the point after which we've substituted:
$$\int \underbrace{(u - 1)^2}_{\text{expand}} \cdot u^{1/2} \; du \; = \;\int \underbrace{(u^2 - 2u + 1)u^{1/2}}_{\text{distribute}} \,du = \int \left(u^{5/2} - 2u^{3/2} + u^{1/2}\right) \,du$$
Now integrate, and then back-substitute.
H: The Language of the Set Theory (with ZF) and their ability to express all mathematics
Accordingly, the language of set theory (in this case using the $ZF$ axioms) is built up with the aim of expressing all mathematics. Now, I know that, for example, the construction of the numbers ($\mathbb{N},\mathbb{Z}, \mathbb{Q}, \mathbb{R}, \mathbb{C}$) is obtained right from the axioms and definitions of the language of set theory, so everything is inside the same theory. But when we are dealing with special structures like, for example, Euclidean geometry, we need to state new axioms, and, apparently, it doesn't make any sense to add these axioms to our original set theory, because it might be possible to find a different model, maybe completely artificial, in which some axiom of Euclidean geometry might not hold or might result in a contradiction. But Euclidean geometry and all the different theories that one might think of are part of mathematics. So my point is that it is impossible to develop all of mathematics just from the axioms of set theory. Does this make sense? Please tell me if I'm wrong and if I'm missing something. Also I have these questions:
In building these new theories (like Euclidean geometry, etc.), each theory uses the language of set theory. But we also add new axioms. So, are we talking about a different language? Are we talking about a different theory? Is the language of set theory a metalanguage for this new language? I'm totally confused.
AI: The way one can develop both Euclidean and non-Euclidean geometry in set theory is by considering the objects of geometry, such as points and lines, as kinds of sets. For example, it is natural to consider a line as a set of points. It is not so natural, but still possible, to consider a point as a set. It doesn't matter much how we do this, only that it is possible via various formalisms. For example, a point $(x,y)$ in $\mathbb{R}^2$ can be considered as the pair $\{\{x\},\{x,y\}\}$ according to Kuratowski's definition of the ordered pair in set theory, and then the real numbers $x$ and $y$ can be considered as sets of rationals according to the definition of $\mathbb{R}$ in terms of Dedekind cuts, and so on.
With a bit more work, one can describe models of non-Euclidean geometry such as the pseudosphere in terms of sets. One way to do this is to first embed these models into a higher-dimensional ambient Euclidean space. A more abstract way is to use the notion of a manifold, which can still be described purely in terms of sets, but this involves so many steps now that most people would not want to do this. Still, experience with such formalizations shows that it can be done in a straightforward (if tedious) manner.
In any case, one does not have to add axioms to $\mathsf{ZF}$ to study the Euclidean or non-Euclidean case. The axioms of Euclidean or non-Euclidean geometry simply pick out subclasses of the class of manifolds. Once you pick a formalization of "manifold" in terms of sets, it is a consequence of $\mathsf{ZF}$ that there are many examples of both types of manifold.
It is true that some things are not decided by $\mathsf{ZF}$, such as the Continuum Hypothesis. However, the statements "there are Euclidean manifolds" and "there are non-Euclidean manifolds" are both simply theorems of $\mathsf{ZF}$. Note that in the context of $\mathsf{ZF}$ it does not make sense to ask whether the universe itself is Euclidean, because the universe of sets is not a manifold at all.
H: homogeneous nonlinear functions
Give an example of a degree-one positively homogeneous function (i.e. a function $f$ such that $\forall \alpha\ge0$, $f(\alpha x) =\alpha f(x)$) that is not linear, with $f: \mathbb{R} \to \mathbb{R}$.
AI: A simple example would be $f(x)=\left|x\right|$.
About the hint in my comment: For $x\in\mathbb{R}$, $\left|x\right| = \sqrt{x^2}$.
Actually the most general function with the desired property would be $f(x)=\alpha x+\beta\left|x\right|$ with $\alpha, \beta \in\mathbb{R}$ arbitrary real constants.
H: Given a group homomorphism $f:G\to H$, if $m$ is relatively prime to $|H|$ and $x^m\in\ker f$, then $x\in \ker f$
Let $f:G\to H$ be a homomorphism, and let $m$ be an integer such that $m$ and $|H|$ are relatively prime. For any $x \in G$, if $x^m \in \ker f$, then $x \in \ker f$.
My proof step: if $x^m \in \ker f$, then $e=f(x^m)=(f(x))^m$, where $f(x)\in H$, so $\operatorname{ord} f(x)\leq m$.
Any suggestions? Thanks.
AI: OK, so $e = f(x^m) = f(x)^m$ by definition.
This tells you that $f(x)$ has order dividing $m$ in $H$. But Lagrange's theorem tells us that the order of $f(x)$ divides $|H|$. Since they are coprime this order must be $1$, hence $f(x) = e$, telling us that $x$ is in the kernel.
H: The graph of Borel measurable function whose range is a separable metrisable space
If $Y$ is separable and $f : X \to Y$ is Borel measurable, then the graph of
$f$ is Borel.
On page 14, in Lemma 2.3 (iii) of this online note, given a basis $\{U_n\}_{n \in \Bbb N}$ of open sets for the topology of a metrisable space $Y$, a formula for the graph of $f$ is displayed (as an image, not reproduced here).
I can't see why this formula represents the graph of $f$. It seems to me it's an empty set.
AI: You're right; it's empty, as the $U_n$ cover $Y$.
It's a typo, most likely meant to be
$$\bigcap_{n=0}^\infty \left( \{(x,y)\mid y\notin U_n\} \cup \{(x,y) \mid x\in f^{-1}[U_n]\} \right)$$
H: Meaning of "defined"
What are the precise meanings of terms "defined", "well defined" and "undefined", etc.? We can't define what "defined" means since then we would run into circular definitions. (If definitiveness is established in terms of something else like "existence", then again we face the same problem.)
AI: Mathematics is not about what "define" means in English or a natural language - that is a subject for philosophy or for the study of language. But we can use natural language to explain what it means to define something in mathematics.
The most common type of definition in mathematics says that any object with a certain collection of properties is given a certain name. For example a polygon with three corners is named a triangle. These definitions are trivial in the sense that you could replace the defined word with its definition, and although it might be inconvenient there would be no formal loss of meaning.
The term well defined is used in some settings where we want to make a definition, but there is a worry that the "definition" is faulty. It is used most often when we define a function on an entire equivalence class by looking at a representative of the equivalence class; in this case we have to argue that the function gives the same value no matter which representative is chosen, and we then say that the function is "well defined".
H: Probability Distribution of Rolling Multiple Dice
What is the function for the probability distribution of rolling multiple (3+) dice? The function is a bell curve, but I can't find the actual function for the situation. For example, what is the function for rolling 50 six-sided dice?
EDIT: A six sided die returns an integer value from 1 to 6 inclusive. I am trying to find a function to add multiple dice together.
AI: In this answer, there is a section titled "Summing Dice". It describes how convolution of the discrete function that is $1$ for each integer from $1$ through $6$, and $0$ otherwise, yields the distribution for the sum of $n$ six-sided dice.
Rolling $50$ six-sided dice will yield an approximately Normal Distribution whose mean is $\mu=50\times\frac72$ and whose variance is $50\times\frac{35}{12}$; thus, a standard deviation of $\sigma=\sqrt{50\times\frac{35}{12}}$.
Mean and Variance of a Single Die
The mean of a single die whose faces vary from $1$ to $n$ is $\frac{n+1}{2}$. For $n=6$, this gives $\frac72$.
The variance of a single die whose faces vary from $1$ to $n$ is "the mean of the squares minus the square of the mean." The sum of the squares from $1$ to $n$ is $\frac{2n^3+3n^2+n}{6}$, so the mean of the squares is $\frac{2n^2+3n+1}{6}$. Subtracting $\frac{n^2+2n+1}{4}$ (the square of the mean) yields $\frac{n^2-1}{12}$. For $n=6$, this gives $\frac{35}{12}$.
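A short numpy sketch of that convolution (my addition, not from the linked answer; it convolves the probability weights $1/6$ rather than $0/1$ indicators, so the result is already normalized):

```python
import numpy as np

# Distribution of the sum of n six-sided dice by repeated convolution.
die = np.ones(6) / 6          # faces 1..6, each with probability 1/6
n = 50
dist = die
for _ in range(n - 1):
    dist = np.convolve(dist, die)
sums = np.arange(n, 6 * n + 1)  # dist[k] = P(sum == n + k)
mean = (sums * dist).sum()
var = ((sums - mean)**2 * dist).sum()
print(mean, var)  # 175.0 and ~145.833 = 50 * 35/12
```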
H: Morphisms between locally ringed spaces and affine schemes
I need some hints to understand the conclusion of the proof of the following lemma from the Stacks Project:
Lemma $\mathbf{6.1.}\,$ Let $X$ be a locally ringed space. Let $Y$ be an affine scheme. Let $f\in\operatorname{Mor}(X,Y)$ be a morphism of locally ringed spaces. Given a point $x\in X$ consider the ring maps $$\Gamma(Y,\mathcal O_Y)\xrightarrow{f^{\#}}\Gamma(X,\mathcal O_X)\to\mathcal O_{X,x}$$ Let $\mathfrak p\subset\Gamma(Y,\mathcal O_Y)$ denote the inverse image of $\mathfrak m_x$. Let $y\in Y$ be the corresponding point. Then $f(x)=y$.
Proof. Consider the commutative diagram
$$\require{AMScd}\begin{CD}\Gamma(X,\mathcal O_X) @>>> \mathcal O_{X,x}\\ @AAA @AAA \\ \Gamma(Y,\mathcal O_Y) @>>> \mathcal O_{Y,f(x)}\end{CD}$$ (see the discussion of $f$-maps below Sheaves, Definition $21.7$). Since the right vertical arrow is local we see that $\mathfrak m_{f(x)}$ is the inverse image of $\mathfrak m_x$. The result follows.$\qquad\square$
I don't understand the last sentence, i.e. "The result follows". If $\mathfrak m_{f(x)}$ (the maximal ideal of $\mathcal O_{Y,f(x)}$) is the inverse image of $\mathfrak m_x$, why should $f(x)=y$?
Thanks in advance.
AI: $Y$ is an affine scheme, so $f(x)$ is a prime ideal of $\mathscr{O}_Y(Y)$ and $\mathscr{O}_{Y,f(x)}$ is the localization of $\mathscr{O}_Y(Y)$ at $f(x)$. A basic fact about localization is that the inverse image in $\mathscr{O}_Y(Y)$ of the unique maximal ideal $\mathfrak{m}_{f(x)}$ of $\mathscr{O}_{Y,f(x)}$ is $f(x)$. Now it follows from the commutativity of the diagram, together with the fact you mentioned, which amounts to the map on stalks being local, that $f(x)$ is the inverse image in $\mathscr{O}_Y(Y)$ of $\mathfrak{m}_x$ under the map $\mathscr{O}_Y(Y)\rightarrow\mathscr{O}_X(X)\rightarrow\mathscr{O}_{X,x}$.
H: Is it possible to simplify $\frac{\Gamma\left(\frac{1}{10}\right)}{\Gamma\left(\frac{2}{15}\right)\ \Gamma\left(\frac{7}{15}\right)}$?
Is it possible to simplify this expression?
$$\frac{\displaystyle\Gamma\left(\frac{1}{10}\right)}{\displaystyle\Gamma\left(\frac{2}{15}\right)\ \Gamma\left(\frac{7}{15}\right)}$$
Is there a systematic way to check ratios of Gamma-functions like this for simplification possibility?
AI: Amazingly, this can be greatly simplified. I'll state the result first:
$$\frac{\displaystyle\Gamma\left(\frac{1}{10}\right)}{\displaystyle\Gamma\left(\frac{2}{15}\right)\Gamma\left(\frac{7}{15}\right)} = \frac{\sqrt{5}+1}{3^{1/10} 2^{6/5} \sqrt{\pi}}$$
The result follows first from a version of Gauss's multiplication formula:
$$\displaystyle\Gamma(3 z) = \frac{1}{2 \pi} 3^{3 z-1/2} \Gamma(z) \Gamma\left(z+\frac13\right) \Gamma\left(z+\frac{2}{3}\right)$$
or, with $z=2/15$:
$$\Gamma\left(\frac{2}{15}\right)\Gamma\left(\frac{7}{15}\right) = 2 \pi \,3^{1/10} \frac{\displaystyle\Gamma\left(\frac{2}{5}\right)}{\displaystyle\Gamma\left(\frac{4}{5}\right)}$$
Now use the duplication formula
$$\Gamma(2 z) = \frac{1}{\sqrt{\pi}}\, 2^{2 z-1} \Gamma(z) \Gamma\left(z+\frac12\right)$$
or, with $z=2/5$:
$$\frac{\displaystyle\Gamma\left(\frac{2}{5}\right)}{\displaystyle\Gamma\left(\frac{4}{5}\right)} = \frac{\sqrt{\pi} \, 2^{1/5}}{\displaystyle\Gamma\left(\frac{9}{10}\right)}$$
Putting this all together, we get
$$\frac{\displaystyle\Gamma\left(\frac{1}{10}\right)}{\displaystyle\Gamma\left(\frac{2}{15}\right)\Gamma\left(\frac{7}{15}\right)} = \frac{\displaystyle\Gamma\left(\frac{1}{10}\right) \Gamma\left(\frac{9}{10}\right)}{\sqrt{\pi^3} \, 2^{6/5} \, 3^{1/10}}$$
And now, we may use the reflection formula:
$$\Gamma(z) \Gamma(1-z) = \frac{\pi}{\sin{\pi z}}$$
With $z=1/10$, and noting that
$$\sin{\left(\frac{\pi}{10}\right)} = \frac{\sqrt{5}-1}{4} = \frac{1}{\sqrt{5}+1}$$
the stated result follows. This has been verified numerically in Wolfram|Alpha.
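A quick numerical check (a Python sketch, my addition, using the standard library's `gamma`):

```python
from math import gamma, pi, sqrt

# Compare the Gamma-function ratio with the stated closed form.
lhs = gamma(1/10) / (gamma(2/15) * gamma(7/15))
rhs = (sqrt(5) + 1) / (3**(1/10) * 2**(6/5) * sqrt(pi))
print(lhs, rhs)  # agree to double precision
```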
H: Invariant subspaces of a linear operator that commutes with a projection
I have an assignment problem, the statement is:
Let $V$ be a vector space and $P:V \to V$ be a projection. That is, a linear operator with $P^2=P.$ We set $U:= \operatorname{im} P$ and $W:= \ker P.$ Further suppose that $T:V\to V$ is a linear operator such that $TP = PT.$ Prove that $TU \subseteq U$ and $TW\subseteq W.$
Here is my attempt:
Suppose $u\in U:= \operatorname{im} P$ so $u= P(v)$ for some $v\in V.$ Then $Tu = TPv = PTv \in \operatorname{im} P := U$ so $TU\subseteq U.$
Suppose $w\in W:= \ker P$ so that $Pw=0.$ Then $P (Tw) = T(Pw) = T(0)=0$ so $Tw\in \ker P := W$ so $TW\subseteq W.$
It seems fine to me, but nowhere did I use that $P$ was a projection, I only used $TP=PT.$ Is my proof okay?
AI: Yes, that's all. $P$ doesn't have to be a projection for this particular exercise.
However, we can also start from the fact that $P$ is a projection: it projects onto the subspace $U$, in the direction of $W$; we also have $U\oplus W=V$ and $P|_U={\rm id}_U$.
Having these, an operator $T$ commutes with $P$ iff both $U$ and $W$ are $T$-invariant subspaces. Your proof can be reformulated for one direction, and the other direction goes as:
If $TU\subseteq U$ and $TW\subseteq W$, then for $u\in U$, $w\in W$ we have $TP(u+w)=Tu$; since $Tu\in U$, $Tu=PTu$, and since $Tw\in W$, $PTw=0$, so finally
$$TP(u+w)=Tu=PTu=PTu+PTw=PT(u+w)\,.$$
H: convolution of exponential distribution and uniform distribution
Given $X$ an exponentially distributed random variable with parameter $\lambda$ and $Y$ a uniformly distributed random variable between $-C$ and $C$. $X$ and $Y$ are independent. I'm supposed to calculate the distribution of $X + Y$ using convolution.
Does anyone have any tips on how to do this?
I understand that the convolution is represented by
$$\int_{-\infty}^{+\infty} f_1(x) \cdot f_2(z-x)dx\tag {1}$$
and so given $$f_1(x) = \begin{cases}
\lambda e^{-\lambda x} &, x\ge 0\\ 0 &, x < 0 \end{cases}$$
and $$f_2(y) = \begin{cases} \frac{1}{2C} &, y \in [-C,C]\\ 0 & , \text{otherwise}\end{cases}$$
(1) becomes: $$\int_{-\infty}^{+\infty} \lambda e^{-\lambda x}\cdot \frac{1}{2C} dx$$
but I don't know how to proceed from here. How do I choose the intervals of $z$ over which to integrate?
AI: Your final integral is incorrect: where is $z$? It needs to appear in your integral limits.
It is probably easier to calculate
$$\int_{-\infty}^{+\infty}f_1(z-x)\cdot f_2(x)\,dx=
\begin{cases}
\int_{-C}^{+C}\lambda e^{-\lambda (z-x)}\cdot \frac{1}{2C}\,dx &, z \ge x\\
0 &, z\lt x
\end{cases}$$
On second thought - your way is better:
$$\begin{align}\int_{-\infty}^{+\infty}f_1(x)\cdot f_2(z-x)dx&=
\begin{cases}
\int_{z-C}^{z+C}\lambda e^{-\lambda x}\cdot \frac{1}{2C} dx &,z \ge C \\
\int_{0}^{z+C}\lambda e^{-\lambda x}\cdot \frac{1}{2C} dx &,C \ge z \ge -C \\
0 &,z\lt -C
\end{cases}
\end{align}
$$
H: Convergence of a series of a given metric..
I'm having trouble with a metric defined on a given set: consider $\mathcal{P}=C_p^\infty([-\pi, \pi])$; that is, $\mathcal{P}$ is the set of all infinitely differentiable functions $f:\mathbb R\rightarrow \mathbb C$ which are $2\pi$-periodic ($f(x+2\pi)=f(x)$ for all $x\in \mathbb R$). We can introduce a metric on this set by setting $$\displaystyle d(f, g)=\sum_{k=0}^\infty \frac{1}{2^k}\frac{\|f^{(k)}-g^{(k)}\|_\infty}{1+\|f^{(k)}-g^{(k)}\|_\infty},$$ where $\displaystyle \|f\|_\infty=\sup_{x\in J}|f(x)|$ and $J\subset \mathbb R$ is an interval of length $2\pi$. The problem is: why does the series above converge? Also, how can I prove that $f_n\to f$ in $\mathcal{P}$ if and only if $\|f_n^{(k)}-f^{(k)}\|_\infty\to 0$ for all $k\in\mathbb N$?
AI: Each term in the series is bounded by $\frac{1}{2^k}$, since $\frac{x}{1+x}<1$ for all $x\ge 0$. Thus we have
$$\sum_{k=0}^\infty \frac{1}{2^k}\frac{\|f^{(k)}-g^{(k)}\|_\infty}{1+\|f^{(k)}-g^{(k)}\|_\infty}\leq \sum_{k=0}^\infty \frac{1}{2^k}=\frac{1}{1-\frac12}=2.$$
I suspect you wish to show that $f_n\to f$ in $\mathcal P$ if and only if $\|f_n^{(k)}-f^{(k)}\|_\infty\to 0$ for all $k$. One direction (that if $f_n\to f$ in $\mathcal P$, $\|f_n^{(k)}-f^{(k)}\|_\infty\to 0$ for all $k$) is easy, and I will leave to you. For the other direction, suppose $\|f_n^{(k)}-f^{(k)}\|_\infty\to 0$ for all $k$ and let $\epsilon>0$. Let $i$ be such that $\frac{1}{2^i}<\frac{\epsilon}{2}$ and $N$ be such that, for all $k\leq i$,
$$n\ge N\implies \|f_n^{(k)}-f^{(k)}\|_\infty < \frac{\epsilon}{2(i+1)}.$$
Then for $n\ge N$ we have
$$\begin{align}
d(f_n,f) &= \sum_{k=0}^\infty \frac{1}{2^k}\frac{\|f_n^{(k)}-f^{(k)}\|_\infty}{1+\|f_n^{(k)}-f^{(k)}\|_\infty}\\
&= \sum_{k=0}^i \frac{1}{2^k}\frac{\|f_n^{(k)}-f^{(k)}\|_\infty}{1+\|f_n^{(k)}-f^{(k)}\|_\infty}+\sum_{k=i+1}^\infty \frac{1}{2^k}\frac{\|f_n^{(k)}-f^{(k)}\|_\infty}{1+\|f_n^{(k)}-f^{(k)}\|_\infty}\\
&\leq \sum_{k=0}^i \frac{1}{2^k}\|f_n^{(k)}-f^{(k)}\|_\infty+\sum_{k=i+1}^\infty \frac{1}{2^k}\\
&\leq \sum_{k=0}^i \frac{1}{2^k}\frac{\epsilon}{2(i+1)}+\frac{1}{2^i}\\
&\leq \sum_{k=0}^i \frac{\epsilon}{2(i+1)}+\frac{\epsilon}{2}=\epsilon\\
\end{align}$$
hence $f_n\to f$ in $\mathcal P$.
H: If $A$ is an $n \times n$ matrix such that $A^3 = O_{3}$, show that $I - A$ is invertible with inverse $I + A + A^2$
So this question is basically a proof.
If $A$ is an $n \times n$ matrix (so square) which satisfies the condition $A^3 = O_{3}$ ($A^{3}$ gives the $3 \times 3$ zero matrix), then show that $(I - A)$ is invertible with inverse $(I + A + A^2)$.
I have no idea where to start, all help welcome.
AI: Hint: for any invertible matrix $A$ with inverse $A^{-1}$, we have $AA^{-1} = I$. Try multiplying out $(I-A)(I+A+A^{2})$. You should get some cancellation and then use your condition to conclude something...
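Once you have done the multiplication, a concrete instance can make the identity tangible (a numpy sketch, my addition; the matrix $A$ below is strictly upper triangular, which forces $A^3 = 0$):

```python
import numpy as np

A = np.array([[0., 1., 2.],
              [0., 0., 3.],
              [0., 0., 0.]])   # strictly upper triangular, so A^3 = 0
I = np.eye(3)
assert np.allclose(np.linalg.matrix_power(A, 3), 0)
assert np.allclose((I - A) @ (I + A + A @ A), I)
print("(I - A)(I + A + A^2) = I")
```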
H: Prove $p^2=p$ and $qp=0$
I am not really aware what's going on in this question. I appreciate your help.
Let $U$ be a vector space over a field $F$ and $p, q: U \rightarrow U$ linear maps. Assume $p+q = \text{id}_U$ and $pq=0$. Let $K=\ker(p)$ and $L=\ker(q)$.
Prove $p^2=p$ and $qp=0$.
$$p(p+q) = p(\text{id}_U) \Rightarrow p^2+pq=p \Rightarrow p^2 =p.$$ I actually first found this by letting $(\text{id}_U) =1$, although that is probably a wrong way to do it.
Since $p^2 =p$ and $q$ is also a linear map defined by $q: U \rightarrow U$ then $q^2=q$. Again, I'm not sure if this actually is the right way to do it.
Then we have $$q(p+q) = q(\text{id}_U) \Rightarrow q^2+qp=q \Rightarrow q^2+ qp= q^2 \Rightarrow qp=0.$$
Prove $K=\text{im}(q)$.
For this question, I honestly do not know how to tackle it from the following definitions.
$K = \ker(p)=\left\{u \in U \ |\ p(u)=0_U\right\}$ and $\text{im}(q)=\left\{q(u) \ | \ u \in U \right\}$.
What's the correct way of doing these questions? Thank you for your time.
AI: Since $\mbox{id}_U$ is the unit of the ring $L(U)$, it is standard to denote it by $1$.
So you know that $p$ is idempotent ($p^2=p$). It follows automatically that $q=1-p$ is idempotent as $q^2=(1-p)^2=1^2-p-p+p^2=1-2p+p=1-p=q$. And then $qp=(1-p)p=p-p^2=0$ as well.
Such a pair $(p,q)$ is the $n=2$ case of what is called an orthogonal decomposition of the unit. When such a thing happens, we have the Peirce decomposition which is very handy for many purposes. In operator theory/algebras in particular.
For any given idempotent $p$, the range and the nullspace yield a direct sum decomposition of the vector space $V=\mbox{im} p\oplus \ker p$. Conversely, a direct sum decomposition of $V=I\oplus K$ defines a unique idempotent such that $\mbox{im} p=I$ and $\ker p=K$. Cf. remark below. Now let us answer your question.
The relation between the idempotents $p$ and $q=1-p$ is
$$
\mbox{im }(1-p)=\ker p\qquad \ker (1-p)=\mbox{im} \,p.
$$
Proof: by symmetry, it suffices to prove one of these two properties.
Let us do the lhs.
Since $0=p-p^2=p(1-p)$, we get $p(1-p)x=0$ for every $x$ whence $\mbox{im}(1-p)\subseteq \ker p$.
Now take $x\in \ker p$. Then $x=x-px=(1-p)x$ belongs to $\mbox{im} (1-p)$. QED.
Remark: it is also useful to remember that the idempotents are exactly the diagonalizable operators with spectrum in $\{0,1\}$. If $p$ is nontrivial ($p\ne 0,1$), we have $\ker p$ the eigenspace of $0$, and $\mbox{im}\, p=\ker(1-p)$ the eigenspace of $1$. That is, $\mbox{im}\, p=\{x\in V\,;\,px=x\}$, as opposed to $\ker p=\{x\in V\,;\,px=0\}$.
H: Let $A_{\alpha}$ be the $\alpha$-rotation matrix. Prove $A_{\alpha}^T = (A_{\alpha})^{-1}$
Let $A_{\alpha}$ be the $\alpha$-rotation matrix. Prove $A_{\alpha}^T = (A_{\alpha})^{-1}$.
In other words, prove $A_{\alpha}$ transpose = $A_{\alpha}$ inverse.
First of all, what is a rotation-matrix? And what does that imply about the $\alpha$-rotation matrix? What does that make $A_{\alpha}$?
AI: Hint: the rotation matrix is given by:
$$M = \begin{bmatrix}
\cos \alpha & -\sin \alpha\\
\sin \alpha&\cos \alpha
\end{bmatrix}$$
and its transpose by:
$$M^{T} = \begin{bmatrix}
\cos \alpha & \sin \alpha\\
-\sin \alpha&\cos \alpha
\end{bmatrix}$$
Now can you show $MM^{T} = I$? (This will imply $M^{T} = M^{-1}$! Also be sure to think about WHY this is the rotation matrix)
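A quick check with numpy (my addition), at an arbitrary angle:

```python
import numpy as np

a = 0.7  # any angle works
M = np.array([[np.cos(a), -np.sin(a)],
              [np.sin(a),  np.cos(a)]])
print(np.allclose(M @ M.T, np.eye(2)))  # True, so M^T = M^(-1)
```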
H: Quick Integration Clarification
$ \int_ {-\pi}^{\pi}\ f(x) \ dx $ is equivalent to integrating $f(x)$ over $-\pi < x < \pi$ (strictly less than) and also $-\pi \le x \le \pi$ (less than or equal). So I just need to reassure myself that the bounds of integration don't change when considering either less than or less than or equal. A quick confirmation will do.
Cheers.
UPDATE: Just applying this to a question to do with Fourier series: $f(x) = \begin{cases}x+\pi & -\pi<x\le 0 \\ \pi & 0 < x < \pi \end{cases}$ with $2\pi$ periodicity.
Can I proceed to compute:
$a_0= {1\over 2L}\left[\displaystyle \int \limits_{-\pi}^{0} (x+\pi) \, dx + \int \limits_{0}^{\pi} \pi\ dx\right]$
AI: With respect to your edit, yes, you can compute this particular integral (and evaluate "at" the bounds, $I(x) \big|_b^a$), but keep in mind Jon Claus's answer: what you are really doing is evaluating the limit of $I(x)$ as $x \to a^-$, $x\to b^+$, and that's crucial to remember in the case of improper integrals.
H: Convergence of $1+\frac{1}{2}\frac{1}{3}+\frac{1\cdot 3}{2\cdot 4}\frac{1}{5}+\frac{1\cdot 3\cdot 5}{2\cdot 4\cdot 6}\frac{1}{7}+\cdots$
Is it possible to test the convergence of $1+\dfrac{1}{2}\dfrac{1}{3}+\dfrac{1\cdot 3}{2\cdot 4}\dfrac{1}{5}+\dfrac{1\cdot 3\cdot 5}{2\cdot 4\cdot 6}\dfrac{1}{7}+\cdots$ by Gauss test?
If I remove the first term I can see $\dfrac{u_n}{u_{n+1}}=\dfrac{(2n+2)(2n+3)}{(2n+1)^2}
\\=\dfrac{\left(1+\dfrac{1}{n}\right)\left(1+\dfrac{3}{2n}\right)}{\left(1+\dfrac{1}{2n}\right)^2}
\\={\left(1+\dfrac{1}{n}\right)\left(1+\dfrac{3}{2n}\right)}{\left(1+\dfrac{1}{2n}\right)^{-2}}\\=\left(1+\dfrac{5}{2n}+\dfrac{3}{2n^2}\right)\left(1-\dfrac{1}{n}\ldots\right)\\=1+\dfrac{3}{2n}+O\left(\dfrac{1}{n^2}\right)$
So the series is convergent.
Is it a correct attempt?
AI: I'm not sure about your result; the ratio test is inconclusive anyway. The way I look at this, I express each term as
$$b_k = \frac{a_k}{2 k+1}$$
where
$$a_k = \frac{1}{2^{2 k}} \binom{2 k}{k}$$
Using, e.g., Stirling's formula, you may show that
$$a_k \sim \frac{1}{\sqrt{\pi k}} \quad (k \to \infty)$$
so that the ratio test on the coefficients provides
$$\frac{b_{k+1}}{b_k} \sim \left(1+\frac{1}{k}\right)^{-1/2} \left (1+\frac{2}{2 k+1}\right)^{-1} \sim 1-\frac{3}{2 k}\quad (k \to \infty)$$
The limit test is inconclusive, as the limit of the ratio is $1$. However, we may see that the series converges by the comparison test because the terms in the sum behave as
$$b_k \sim \frac{1}{2 \sqrt{\pi}} k^{-3/2} \quad (k \to \infty)$$
That said, we know that the series converges to $\pi/2$ as follows: consider the generating function
$$f(x) = \sum_{k=0}^{\infty} a_k \frac{x^{2 k+1}}{2 k+1}$$
Then for all $x$ within the radius of convergence of the series on the right, we may differentiate to get
$$f'(x) = \sum_{k=0}^{\infty} a_k \, x^{2 k} = \frac{1}{\sqrt{1-x^2}}$$
Therefore
$$f(x) = \arcsin{x}$$
and the series has value $f(1) = \pi/2$.
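To watch the convergence numerically (a Python sketch added for illustration; the recurrence $a_{k+1}=a_k\,\frac{2k+1}{2k+2}$ avoids computing large binomials):

```python
from math import pi

# Partial sums of sum_k a_k/(2k+1) with a_k = C(2k,k)/4^k.
a, total = 1.0, 0.0
for k in range(200_000):
    total += a / (2 * k + 1)
    a *= (2 * k + 1) / (2 * k + 2)
print(total, pi / 2)  # ~1.5695 vs 1.5708; the tail decays like k^(-1/2)
```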
H: Recovering a group from its quotient group
Suppose I've a group $G$ and a normal subgroup $H$. I know the structure of $G/H$ as well as the structure of $H$. Is it possible to recover the original structure of G from this?
AI: There is one important piece of data that you have omitted: namely, if $H$ is normal in $G$, then $G$ acts on $H$ by conjugation, and so there is a homomorphism
$G/H \to Out(H)$ (the group of outer automorphisms of $H$); without remembering this, there would be no chance of reconstructing $G$.
Now in general giving $G/H$, $H$, and a map from $G/H$ to $Out(H)$, there is
still no chance of reconstructing $G$. E.g. even if $G$ and $H$ are abelian,
so that the homomorphism $G/H \to Out(H)$ is trivial, Zev's answer gives
a counterexample.
However, if $G/H$ and $H$ are finite of coprime order, then in fact any extension
of $G/H$ by $H$ is necessarily a semidirect product, and so in this case your question (suitably modified by remembering the conjugation action) has a positive answer. This is the Schur--Zassenhaus theorem.
This is why counterexamples (such as in Zev's answer) tend to focus on $p$-groups.
H: Convex homogeneous function
Prove (or disprove) that any CONVEX function $f$, with the property that $\forall \alpha\ge 0, f(\alpha x) \le \alpha f(x)$, is positively homogeneous; i.e. $\forall \alpha\ge 0, f(\alpha x) = \alpha f(x)$.
AI: Maybe I'm missing something, but it seems to me that you don't even need convexity. Given the property you stated, we have that, for $\alpha>0$,
$$f(x)=f(\alpha^{-1}\alpha x)\leq \alpha^{-1}f(\alpha x)$$
so that $\alpha f(x)\leq f(\alpha x)$ as well. Therefore, we have that $\alpha f(x)=f(\alpha x)$ for every $\alpha>0$. The equality for $\alpha=0$ follows by continuity of $f$ at zero (which is implied by convexity).
H: For the sinusoidal graph below, write the equation in form $y = a\cos\left(\frac{2\pi}{p}x\right)+b$
For the sinusoidal graph below, write the equation
$$y = a\cos \left(\frac{2\pi}{p}x\right) + b.$$
The answer I got by looking at the graph is
$$y = 25\cos\left(\frac{2\pi}{12}x\right) + 30$$
Thanks any input or help would be appreciated :D
AI: I don't think there's anything to add; your answer is correct.
H: Proof of Fundamental Theorem of Finite Abelian Groups?
The only proofs I've seen of this tend to involve a few intermediate results and a couple of induction proofs with some clever constructions in them. They aren't hard to follow and they're pretty short, but they do still seem to remove some of the "why it should be true" aspect of it. The result seems blindingly obvious to me from a generator perspective: If a finite abelian group $G$ has generators $x,y,z...$ then any element can be represented by the form
$$g=x^ay^bz^c...$$
And there's an immediate and clear isomorphism from this to
$$\mathbb{Z}_{|x|}\times\mathbb{Z}_{|y|}\times\mathbb{Z}_{|z|}\times\cdots$$
Done. It can't help but be true. As far as I can see this is immediately extensible to finitely generated abelian groups as well. But the proofs I've seen, while short and simple enough, have never made its proof seem quite so clear.
Is there some major complication I'm missing here? I don't see anything fishy in assuming the existence of the generators, or proving the form of $g$, or actually constructing the isomorphism. But I assume I'm missing something because I've never seen the result proved like this, even though it's an obvious construction. Is the longer proof for pedagogical reasons (i.e. the lemmas and techniques are useful later), or am I missing a big hole in this proof?
AI: Consider the group $G=\mathbb Z/2\times\mathbb Z/3\times\mathbb Z/5$ and the elements $x=(1,1,0)$ and $y=(0,1,1)$, whose orders are $6$ and $15$. You can easily check that $x$ and $y$ generate $G$, but there surely is no isomorphism $G\cong\mathbb Z/6\times\mathbb Z/15$, for this last group has the wrong number of elements.
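For the skeptical, a brute-force check (a small Python sketch added for illustration) confirms that $x$ and $y$ do generate all $30$ elements of $G$, while $\mathbb Z/6\times\mathbb Z/15$ has $90$:

```python
# G = Z/2 x Z/3 x Z/5 with componentwise addition.
def add(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 3, (u[2] + v[2]) % 5)

x, y = (1, 1, 0), (0, 1, 1)
generated = {(0, 0, 0)}
while True:
    new = {add(g, s) for g in generated for s in (x, y)} - generated
    if not new:
        break
    generated |= new
print(len(generated))  # 30 = |G|, so x and y generate G
```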
H: What property allows me to integrate a gaussian function?
Whenever I integrate a gaussian function, I get to a step that makes me a little uncomfortable because I don't fully understand it. The only way I know of to analytically integrate the gaussian function is to multiply two of them together, like so...
$$\int_{- \infty}^{\infty} e^{-x^2}dx\int_{- \infty}^{\infty} e^{-y^2}dy = \int_{- \infty}^{\infty}\int_{- \infty}^{\infty}e^{-x^2-y^2}\,dx\,dy$$
My question is: what allows me to group two different integrands together into one integrand? When is this technique not allowed? Thanks in advance.
AI: You are allowed to do this due to Fubini's theorem.
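For completeness, here is a sketch of where this step leads. Fubini's theorem applies whenever the integrand is non-negative (that case is often called Tonelli's theorem) or absolutely integrable over the product space, which is the case here, and it is exactly what justifies rewriting the product of integrals as a double integral. Switching to polar coordinates then finishes the classic computation:
$$\left(\int_{-\infty}^{\infty} e^{-x^2}dx\right)^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2+y^2)}\,dx\,dy = \int_{0}^{2\pi}\!\int_{0}^{\infty}e^{-r^2}\,r\,dr\,d\theta = \pi,$$
so $\int_{-\infty}^{\infty} e^{-x^2}dx = \sqrt{\pi}$. The technique can fail when the double integral of the absolute value is infinite, in which case the two iterated integrals need not agree.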
H: Let $A,B$ be elements of $M_2(\mathbb{R})$. Give an example to show that $A+B$ can be invertible if $A,B$ are both non-invertible
The goal for this problem is to show that even if two matrices $A$ and $B$ are non-invertible, $A+B$ can be invertible. I tried to show this using a proof, but I ended up actually proving that this isn't true. I thought that it might just not be true, but the question before it was to prove the contrapositive, if $A,B$ are both invertible, show, by example, that $A+B$ can be non-invertible. I was able to prove that one, but using the same method I hit the end of the road proving this problem. Here is a basic outline of my steps for the problem. Please show me where I went wrong.
1) $$A = \begin{bmatrix}
e & f\\
g & h
\end{bmatrix}$$ and
$$B = \begin{bmatrix}
i & j\\
k & l
\end{bmatrix}$$
2) $A+B=C$, so $$C = \begin{bmatrix}
e+i & f+j\\
g+k & h+l
\end{bmatrix}$$
After this, I say that the determinant of $C$ must not be 0 if it is to be invertible. If $A,B$ are non-invertible, then their determinants must be 0. Therefore,
det($A$): $eh-gf = 0$. Let $e=g=h=1$
$(1)(1) - (1)f = 0$, so $f$ must also be 1.
Therefore, $$A = \begin{bmatrix}
1 & 1\\
1 & 1
\end{bmatrix}$$
I then did the same thing for $B$.
det($B$): $il - kj = 0$. If $l=1$, $i=2$, and $j=1$, then $k$ must be 2 for $B$ to be non-invertible. However, now that
$$B = \begin{bmatrix}
2 & 1\\
2 & 1
\end{bmatrix}$$
$A+B=C$, where $$C = \begin{bmatrix}
3 & 2\\
3 & 2
\end{bmatrix}$$
Which is non-invertible as well.
Where did I go wrong?
AI: Let $A=\left(\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\right)$ and $B=\left(\begin{smallmatrix}0&0\\0&1\end{smallmatrix}\right)$. Each is non-invertible (with determinant $0$), but their sum is the identity and is invertible. |
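A quick numeric sanity check of this example (a sketch, assuming NumPy is available):

    import numpy as np

    A = np.array([[1, 0], [0, 0]])
    B = np.array([[0, 0], [0, 1]])
    # det(A) = det(B) = 0, but A + B is the identity, with det 1.
    print(np.linalg.det(A), np.linalg.det(B), np.linalg.det(A + B))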
H: Why if the columns of $Q$ are an orthonormal basis for $F^n$ then $Q$ is unitary?
Why if the columns of $Q$ are an orthonormal basis for $F^n$ then $Q$ is unitary?
There exist any available theorems to prove it?
AI: Here's one way to see it. If the columns form an orthonormal basis, you can directly check that $QQ^\ast = Q^\ast Q = I$. So $Q^{-1} = Q^\ast$. Then we can compute what $Q$ does to norms using the inner product: $$\begin{align*} \|a\|^2 &= (a,a) \\
&= (Ia, a) \\
&= (Q^{-1}Qa,a) \\
&= (Q^\ast Qa,a) \\
&= (Qa, Qa) \\
&= \|Qa\|^2 \end{align*}$$ and so $Q$ preserves norms and is bijective since it has an inverse. |
H: Finding the rational values of constant for which these constants are roots of equation
Problem :
Determine all rational values for which $a,b,c$ are the roots of $x^3+ax^2+bx+c=0$
Solution :
Sum of the roots $a+b+c = -a$ ........(i) ( Since , as per question $a,b,c$ are roots of equation and we have to find values )
$\sum_{a,b,c} ab = b ........(ii)$
$abc= -c \Rightarrow ab = -1 \Rightarrow a =\frac{-1}{b} .....(iii)$
Solving (i),(ii) and (iii) simultaneously we get an equation which is biquadratic in "a" which is :
$2a^4-2a^2-a+1=0$. Here we can find that $a = 1$ is a root of this equation, and dividing the quartic by $a-1$ using polynomial long division we are left with the cubic $2a^3+2a^2-1$ ...
Please guide further and suggest whether this is correct or not.. Thanks...
AI: Neither $-1$ nor $1$ is a solution to your final cubic, so this calls for the use of the cubic formula (below, $a,b,c,d$ denote the coefficients of that cubic, not the unknowns of the original problem).
$$a = 2, b = 2, c = 0, d = -1$$
$$\Delta_0 = 2^2 - 3(2)(0) = 4$$
$$\Delta_1 = 2(2^3) - 9(2)(2)(0) + 27(2^2)(-1) = 16 - 108 = -92$$
$$C = \sqrt[3]{\frac{-92 + \sqrt{(-92)^2 - 4(4^3)}}{2}} = \sqrt[3]{\frac{-92 + \sqrt{8208}}{2}} = \sqrt[3]{\frac{12\sqrt{57} - 92}{2}} = \sqrt[3]{6\sqrt{57} - 46}$$
We can divide $8208$ by $-27a^2$ (or $-108$) to get the equation's discriminant, $-76$. This means the equation has one real root and two complex roots. That real root is given by:
$$-\frac{1}{6}\left(2 + \sqrt[3]{6\sqrt{57} - 46} + \frac{4}{\sqrt[3]{6\sqrt{57} - 46}}\right)$$
A number which is definitely not rational. So you have found the only rational root, 1. |
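A numeric cross-check of both claims (a sketch, assuming NumPy is available; note np.cbrt takes the real cube root, which matters here since $6\sqrt{57}-46<0$):

    import numpy as np

    # Roots of 2a^3 + 2a^2 - 1: one real (irrational), two complex.
    print(np.roots([2, 2, 0, -1]))

    # The closed form for the real root from the cubic formula above.
    C = np.cbrt(6 * np.sqrt(57) - 46)
    print(-(2 + C + 4 / C) / 6)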
H: Show the points $u,v,w$ are not collinear
Consider triples of points $u,v,w \in R^2$, which we may consider as single points $(u,v,w) \in R^6$. Show that for almost every $(u,v,w) \in R^6$, the points $u,v,w$ are not collinear.
I think I should use Sard's Theorem, simply because that is the only "almost every" statement in differential topology I've read so far. But I have no idea how to relate this to regular value etc, and to solve this problem.
Another Theorem related to this problem is Fubini Theorem (for measure zero):
Let $A$ be a closed subset of $R^n$ such that $A \cap V_c$ has measure zero in $V_c$ for all $c \in R^k$. Then $A$ has measure zero in $R^n$.
Thank you very much for your help!
AI: $u,v,$ and $w$ with $u\neq v$ are collinear if and only if there is some $\lambda\in\mathbb{R}$ with $w=v+\lambda(v-u)$; the degenerate triples with $u=v$ lie in a proper linear subspace of $\mathbb{R}^6$ and hence already form a set of measure zero. We can thus define a smooth function
$$\begin{array}{rcl}f:\mathbb{R}^5&\longrightarrow&\mathbb{R}^6\\(u,v,\lambda)&\longmapsto&(u,v,v+\lambda(v-u))\end{array}$$
By the equivalence mentioned in the first sentence, the image of $f$ is exactly the points $(u,v,w)$ in $\mathbb{R}^6$ with $u,v,$ and $w$ collinear. Now, because $5<6$, every point in $\mathbb{R}^5$ is a critical point, so that the entire image of $f$ has measure $0$, by Sard's theorem. |
H: On the Frattini Subgroup
For a prime $p$, let $H=\{x\in \mathbb{C}\colon x^{p^n}=1 \mbox{ for some } n\geq 1\}$ be the Prufer $p$-group, $C_2=\langle y\colon y^2=1\rangle$, and $G=H\oplus C_2$. Then $H$ is the unique maximal subgroup of $G$, hence it is the Frattini subgroup of $G$. Let $S=\{e^{2\pi i/p^n} \colon n\geq 1\}$. Then $S$ generates the subgroup $H\leq G$, and $S\cup \{y\}$ generates the whole of $G$. Now, as $H$ is the Frattini subgroup of $G$, it is the set of non-generators of $G$. But as $S\cup \{y\}$ generates $G$, we must conclude that $\{y\}$ generates $G$, which is a wrong conclusion. Can anyone clarify the mistake in the argument, if any?
AI: Your mistake is in your interpretation of "non-generators". A non-generator is an element that can be removed from any generating set and still leave a generating set. It is true that $S\cup\{y\}$ generates $G$, but given any $z\in S$, we may put $S':=S\setminus\{z\},$ and $S'\cup\{y\}$ will still generate $G$. We cannot remove all elements of $S$ from $S\cup\{y\}$ and still have a generating set for $G$. |
H: Why is the empty set finite?
On page 25 of Principles of Mathematical Analysis (ed. 3) by Rudin, there is the definition (excluding the irrelevant parts for this question):
Definition 2.4: For any positive integer $n$, let $J_n$ be the set whose elements are the integers $1,2,...,n$; let $J$ be the set consisting of all positive integers. For any set $A$, we say:
(a) $A$ is finite if $A \thicksim J_n$ for some $n$ (the empty set is also considered to be finite).
I am unsure why the empty set is considered to be finite. Given the defintion, for the empty set to be finite, then $\emptyset \thicksim J_n$ for some $n$.
The issue is for equivalence, there has to exist a one-to-one mapping of $\emptyset$ onto $J_n$. $\emptyset$ has no elements to correspond and since the definition requires $n \in J$, $J_n$ will always have at least one element.
Thus, my question: Why is the empty set considered finite?
AI: That definition is a (rare) example of Rudin doing things inefficiently. He could have defined $J_n$ for each non-negative integer $n$ to be the set of non-negative integers less than $n$, so that $J_0=\varnothing$, $J_1=\{0\}$, $J_2=\{0,1\}$, etc. Then he could have defined a set $A$ to be finite if and only if $A\sim J_n$ for some $n\in\Bbb N$ (where $\Bbb N$ includes $0$). This is essentially the usual set-theoretic definition stripped of some set-theoretic detail that would be out of place here. |
H: A cosine function has a maximum value of 14 and a minimum value of 4, a period of 7, and a phase shift of 12.
A cosine function has a maximum value of 14 and a minimum value of 4, a period of 7, and a phase shift of 12. Write an equation representing this cosine function...
Could someone tell me if I'm right, and if I'm wrong, explain why and give a solution. My answer: $y\,\,\, = \,\,\,14\cos 2\pi {{\left( {x\,\, - \,\,12} \right)} \over 7}\,\,\, + \,\,\,2$
AI: If the max value is $14$ and the min value is $4$, what is the midline value? What is the amplitude? Answering these questions will let you fix your errors--specifically, the $+2$ at the end and the $14$. The rest looks good. |
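For reference, a worked completion of these hints, under the standard conventions for midline and amplitude: the midline is $\frac{14+4}{2}=9$ and the amplitude is $\frac{14-4}{2}=5$, so one equation fitting all the data is $y = 5\cos\left(\frac{2\pi}{7}(x-12)\right)+9$.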
H: Solving for the volume of a tetrahedron
Can anyone explain why I was incorrect in this problem? (The problem statement and my worked answer were given as an image, not reproduced here.)
I used a tetrahedron solver online, http://rechneronline.de/pi/tetrahedron.php, and it says I have the right answer.
AI: Assuming that $S$ denotes the length of a side, your answer is correct. See here. |
H: Math problem using cups..
I have the following question:
Mike has 58 white cups and 198 green cups. He wants to place his cups in stacks, by color, so that each stack contains cups of a single color and all stacks have the same number of cups.
What is the greatest number of cups he can place in each stack?
How would I answer this, guys?
AI: Edit: It seems that you want a given stack to consist of only one color of cup, and for each stack to have the same number of cups. (This is a slightly more interesting problem, now.) We have to divide all the white cups up evenly and make the stack size as big as possible, so the stack size needs to be as large a divisor of $58$ as we can manage. Likewise, the stack size needs to be as large a divisor of $198$ as we can manage. Thus, the stack size needs to be the greatest common divisor of $58$ and $198$--that is, $2$. |
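A one-line check of that greatest common divisor (a sketch in Python):

    from math import gcd

    print(gcd(58, 198))  # 2, so the largest possible stack size is 2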
H: Continuity of $f(x)=[x]+ \sqrt{x-[x]}$
Consider the function $f(x)=[x]+ \sqrt{x-[x]}, \, x\in \Bbb R$ ; where "$[ \space ]$" denotes the greatest integer function. It is obvious that if $b$ is an integer, then
$$\lim_{x\to b-} f(x)=\lim_{x\to b-}[x] + \sqrt { \lim_{x\to b-}x - \lim_{x\to b-}[x] } =b-1+\sqrt {b-(b-1)}=b-1+1=b$$
$$\lim_{x\to b+} f(x)=\lim_{x\to b+}[x] + \sqrt { \lim_{x\to b+}x - \lim_{x\to b+}[x] } =b+\sqrt {b-b}=b,$$ and
$$f(b)=[b]+ \sqrt{b-[b]}=b+\sqrt {b-b}=b,$$ hence $f(x)$ is continuous for all integer values of $x$. Is $f(x)$ continuous for all non-integer values of $x$ also, i.e. is it true that $f(x)$ is continuous for all $ x \in \Bbb R$ ?
AI: Write $f(x)$ as
$$ f(x) = x + g(x) \quad \text{where} \quad g(x) = \sqrt{x - [x]} - (x - [x]). $$
It is clear that $g(x) = \sqrt{x} - x$ on $[0, 1)$ and $g(x+1) = g(x)$.
From this observation it is easy to prove that $g$ is continuous on $[0, 1]$ (the point $x = 1$ only matters), and then this continuity extends to $\Bbb{R}$ by periodicity of $g$. Therefore $g$ is continuous and hence the same is true for $f$. |
H: Multipliciousness within an inner product space.
Question:
Let $V$ be an inner product space and $v,w\in V$. Prove that $\lvert\langle v,w\rangle\rvert=\lVert v\rVert \lVert w\rVert$ if and only if one of the vectors $v$ or $w$ is a multiple of the other.
Attempt:
Assume the identity holds and $y\neq 0$. Let
$$ a=\frac{\langle x,y\rangle }{\lVert y \rVert^2}, $$
and let
$$ z=x-ay. $$
Now, note that $y$ and $z$ are orthogonal because if $z=x-ay$, then by the definition of $a$ we have
$$a=\frac{\langle x,y\rangle }{\lVert y \rVert^2}=\frac{\langle z+ay, y\rangle }{\langle y,y\rangle }=\frac{\langle z,y\rangle }{\langle y,y\rangle }+a\frac{\langle y,y\rangle }{\langle y,y\rangle }=\frac{\langle z,y\rangle }{\langle y,y\rangle }+a,$$
which means that
$$ \frac{\langle z,y\rangle }{\langle y,y\rangle }=0~\text{or}~ \langle z,y\rangle =0,$$
namely $y$ and $z$ are orthogonal. Furthermore, since $\lvert \langle x, y \rangle \rvert= \lVert x \rVert \lVert y \rVert$, then
$$ \frac{\lvert \langle x, y \rangle \rvert}{\lVert y \rVert} =\lVert x \rVert ~\overset{1/\lVert y \rVert}{\implies}~\frac{\lvert \langle x, y \rangle \rvert}{\lVert y \rVert^2} =\frac{\lVert x \rVert}{\lVert y \rVert} =\lvert a \rvert.$$
Lemma: Let $V$ be an inner product space, and suppose that $x$ and $y$ are orthogonal vectors in $V$. Then $\lVert x+y \rVert^2 = \lVert x \rVert^2 + \lVert y \rVert^2$.
Now, we know that
$$ \lVert x \rVert^2 = \lVert ay+z \rVert^2,$$
but, by the lemma, this means that
$$ \lVert ay+z \rVert^2 = \lVert ay \rVert^2 + \lVert z \rVert^2~~\overset{\text{not sure where to go}}{\dots}$$
Note:
I'm trying to follow along with Friedberg's description on pages 337 and 338 in his Linear Algebra.
EDIT$^1$:
OK, so I think I've got an idea. To complete the proof I need to notice that $\frac{\lVert x \rVert}{\lVert y \rVert} =\lvert a \rvert$, which means that $\lVert x \rVert = \lvert a \rvert\lVert y \rVert$ and more importantly that $\lVert x \rVert^2 = \lvert a \rvert^2\lVert y \rVert^2,$ but this is exactly the $\lVert ay\rVert^2$ from the application of the lemma as $\lVert ay\rVert^2=(\lVert ay\rVert)^2=(\lvert a\rvert\lVert y\rVert)^2=\lvert a\rvert^2\lVert y\rVert^2$. This will give the result.
EDIT$^2$:
I found here, in E. B. Vinberg's A Course in Linear Algebra, a different flavor of this proof (if anybody happens to care).
AI: Notice that:
$$ \begin{align*}
\lVert x \rVert^2 &= \lVert ay+z \rVert^2 \\
\lVert x \rVert^2 &= \lVert ay \rVert^2 + \lVert z \rVert^2 \\
\lVert x \rVert^2 &= (|a|\lVert y \rVert)^2 + \lVert z \rVert^2 \\
\lVert x \rVert^2 &= \left( \dfrac{\lVert x\rVert}{\lVert y\rVert}\lVert y \rVert \right)^2 + \lVert z \rVert^2 \\
\lVert x \rVert^2 &= \lVert x \rVert^2 + \lVert z \rVert^2 \\
0 &= \lVert z \rVert^2 \\
\lVert z \rVert &= 0 \\
z &= 0 \qquad \text{ (the zero vector)}
\end{align*} $$
Thus, we have $0=z=x-ay \iff x=ay$ so that $x$ and $y$ are scalar multiples of each other, as desired.
It remains to prove the converse. Suppose $x=ay$ for some scalar $a$. Then we have:
$$\lvert\langle x,y\rangle\rvert
=\lvert\langle ay,y\rangle\rvert
=\lvert a\langle y,y\rangle\rvert
=\lvert a \left( \lVert y\rVert \lVert y\rVert \right) \rvert
=\lvert a \rvert \left( \lVert y\rVert \lVert y\rVert \right)
=\lVert ay\rVert \lVert y\rVert
=\lVert x\rVert \lVert y\rVert$$
as desired. |
H: Existence of a function satisfying given conditions
I was going through the topic of functions (boundedness, continuity, etc.) and ran into a problem.
Does there exist a function defined on the closed interval $[a,b]$ which is....
1. bounded;
2. takes its maximum and minimum values;
3. takes all its values between the maximum and minimum values;
Can we then conclude that this function is continuous at some points or on some subintervals of $[a,b]$?
AI: The function below satisfies all three conditions above, but it is discontinuous at every point of $[-1,1]$:
$f(x)=\begin{cases}
1,&\text{if $x = 0$}\\
x,&\text{if $x$ is rational, $x \neq 0$, $x\neq 1$}\\
-x,&\text{if $x$ is irrational} \\
0,&\text{if $x = 1$}
\end{cases}$
It is impossible to draw the graph of the function $y = f(x)$ exactly; the original answer included a sketch giving an idea of its behavior (not reproduced here).
Hence answer is no.
You can find this example in Counterexamples in Calculus.
H: (Simple) Examples on Non-Commutative Rings
It looks like it is easier to find examples of commutative rings than of
non-commutative rings. Probably the easiest examples of the former are
$\mathbb{Z}$ and $\mathbb{Z}_n$. Elaborations on these two commutative rings can be found in various
sources, including here and here. These are quite simple and easy to comprehend.
However, simple examples of the non-commutative kind are not that easy to find.
One example is found here and it has been mentioned as "one of the simplest examples of a non-commutative ring". This is also on the easier side.
EDIT: The following two are also commutative rings.
Other examples are given in this enumeration. But as you can see, examples like Gaussian integers or Eisenstein integers are difficult for starters to comprehend.
Do you think you can give one or two simple examples on non-commutative rings, based on every day numbers?
If it is that difficult, perhaps some insight comments why this is difficult would be welcome.
AI: How about the ring of quaternions? This is an example of a non-commutative ring and I don't think it's super difficult to wrap your head around. Understanding the ring of quaternions is also in fact very useful in understanding representations of rotations and groups of isometries for polyhedra. |
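A small sketch illustrating the non-commutativity, representing the quaternion units $i,j,k$ by $2\times2$ complex matrices (one standard embedding; assuming NumPy is available):

    import numpy as np

    i = np.array([[1j, 0], [0, -1j]])
    j = np.array([[0, 1], [-1, 0]], dtype=complex)
    k = i @ j
    # ij = k but ji = -k, so multiplication is not commutative.
    print(np.allclose(i @ j, -(j @ i)))  # True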
H: Calculate the extreme value of the function $f(x,y)$
Calculate the extreme value of $f(x,y)=x^{2}+y^{2}+xy+\dfrac{1}{x}+\dfrac{1}{y}$.
Please help me.
AI: Use the normal derivative test. First order conditions
$$\frac{\partial f}{\partial x}=2x+y-\frac 1{x^2}=0$$
$$\frac{\partial f}{\partial y}=2y+x-\frac 1{y^2}=0$$
If $x,y\in \mathbb R$ then the solution is $x=y=\frac 1{3^{1/3}}$.
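A numeric sanity check of this critical point (a sketch, assuming SciPy is available):

    from scipy.optimize import minimize

    f = lambda v: v[0]**2 + v[1]**2 + v[0]*v[1] + 1/v[0] + 1/v[1]
    # Starting near (1, 1), the minimizer converges to x = y = 3**(-1/3).
    print(minimize(f, [1.0, 1.0]).x, 3 ** (-1 / 3))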
H: How to prove $\lvert \lVert x \rVert - \lVert y \rVert \rvert \overset{\heartsuit}{\leq} \lVert x-y \rVert$?
I'm trying to show that $\lvert \lVert x \rVert - \lVert y \rVert \rvert \overset{\heartsuit}{\leq} \lVert x-y \rVert$. A hint would be nice.
AI: Observe that
$\lVert x \rVert = \lVert (x -y) +y \rVert \leq \lVert (x -y) \rVert + \lVert y \rVert$
which gives
$\lVert x \rVert - \lVert y \rVert \leq \lVert x -y \rVert$ ... $(1)$
Further,
$-(\lVert x \rVert - \lVert y \rVert ) \leq \lVert (y -x) \rVert = \lVert (x -y) \rVert $... $(2)$
From $(1)$ and $(2)$ result follows. |
H: Doubt over a term in the definition
I was studying a book Handbook of Product Graphs by Richard Hammack, Wilfried Imrich, Sandi Klavžar. There is a definition of Antipodal Graph.
A graph $G$ is called antipodal if for every vertex $u \in V(G)$ there exists a vertex $v$ such that $V(G) = I(u,v)$.
What does $I(u,v)$ stand for here?
What I know about the antipodal graph of a graph $G$ is that it is a graph containing the vertices of $G$, where two vertices are adjacent in the antipodal graph when their distance equals the diameter of $G$.
Thanks a lot for help.
AI: It’s defined near the top of page $9$, according to the Look Inside feature at Amazon: $I(u,v)$, the interval between $u$ and $v$, is the set of vertices lying on shortest paths between $u$ and $v$, including the vertices $u$ and $v$ themselves. For example, an even cycle $C_{2n}$ is antipodal, because if $u$ is any vertex, we may take $v$ to be the diametrically opposite vertex at distance $n$ from $u$, and $I(u,v)$ will then be the whole graph: the two paths from $u$ to $v$ are both of minimum length, and every vertex of $C_{2n}$ lies on one of them. |
H: Prove this vector identity
Let $f$ and $g$: $\mathbb R^3 \rightarrow \mathbb R$ be $C^{1}$ scalar functions. Prove that
$$ \nabla \left( \frac{f}{g} \right) = \frac{1}{g^2}\left( g\nabla f - f\nabla g \right)$$ $$g \neq 0$$
AI: I will assume $f, g : \mathbb{R}^3 \to \mathbb{R}$.
We have
$$\nabla\bigg(\frac{f}{g}\bigg) = \bigg\langle \frac{\partial f/g}{\partial x},\frac{\partial f/g}{\partial y}, \frac{\partial f/g}{\partial z} \bigg \rangle $$
By the quotient rule, the above becomes
$$\bigg \langle \bigg(\frac{\partial f}{\partial x} g - \frac{\partial g}{\partial x}f\bigg)\frac{1}{g^2},\bigg(\frac{\partial f}{\partial y} g - \frac{\partial g}{\partial y}f\bigg)\frac{1}{g^2},\bigg(\frac{\partial f}{\partial z} g - \frac{\partial g}{\partial z}f\bigg)\frac{1}{g^2} \bigg \rangle $$
which is exactly
$$\frac{1}{g^2}(g\nabla f - f\nabla g)$$ |
H: How does the symmetric group act on tuples?
Given a set $X$, I've seen people write the action of a permutation $ \sigma \in S_{n}$ on an $n$-tuple of elements of $X$ as
$$ \;\; \sigma (x_{1},...,x_{n})=(x_{\sigma^{-1}(1)},...,x_{\sigma^{-1}(n)}). \;\;$$
But this does not seem to me to give a left action of $S_{n}$ on $n$-tuples, since
$$\sigma(\tau(x_{1},...,x_{n}))=\sigma(x_{\tau^{-1}(1)},...,x_{\tau^{-1}(n)})=(x_{\sigma^{-1}\tau^{-1}(1)},...,x_{\sigma^{-1}\tau^{-1}(n)})$$
is not in general equal to
$$(\sigma \tau)(x_{1},...,x_{n})=(x_{(\sigma \tau)^{-1}(1)},...,x_{(\sigma \tau)^{-1}(n)})=(x_{\tau^{-1}\sigma^{-1}(1)},...,x_{\tau^{-1}\sigma^{-1}(n)}).$$
On the other hand, if we don't write out symbols, but think of a tuple as a map from
the $n$-element set $\{1,...,n\}$ to $X$, then the first formula seems to make sense:
the $\sigma^{-1}$ comes from precomposing with the right action of $S_{n}$ on the set $\{1,...,n\}$ determined by the left action of $S_{n}$ on $\{1,...,n\}$, where we use the convention that functions act on the left of elements.
So the question is whether the first formula defines a left or a right action on tuples.
The symbols seem to suggest a right action while interpreting tuples as maps seems to suggest a left action. I think that in my formulas above I must be getting confused.
AI: The first formula defines a left action on tuples. Note that $\sigma \in S_n$ acts on the positions, not on the indices. You went wrong in computing $\sigma\bigl(\tau(x_1, \ldots, x_n)\bigr)$.
We have $\tau(x_1, \ldots, x_n) = (x_{\tau^{-1}(1)}, \ldots, x_{\tau^{-1}(n)})$ by definition. Now $\sigma$ acts from the left on this tuple. Acting on a tuple, $\sigma$ "moves" the entry in position $\sigma^{-1}(1)$ to the front, right? But which $x_j$ is that? If we write $y_i = x_{\tau^{-1}(i)}$ we have
$$ \sigma(x_{\tau^{-1}(1)}, \ldots, x_{\tau^{-1}(n)}) = \sigma(y_1, \ldots, y_n)
= (y_{\sigma^{-1}(1)}, \ldots, y_{\sigma^{-1}(n)}) $$
So the new first entry is $y_{\sigma^{-1}(1)} = x_{\tau^{-1}\sigma^{-1}(1)}$, giving
$$ \sigma\bigl(\tau(x_1, \ldots, x_n)\bigr) = (x_{\tau^{-1}\sigma^{-1}(1)}, \ldots, x_{\tau^{-1}\sigma^{-1}(n)}) = (x_{(\sigma\tau)^{-1}(1)}, \ldots, x_{(\sigma\tau)^{-1}(n)}). $$ |
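A concrete check that the formula composes correctly as a left action (a sketch in Python, using $0$-based positions):

    def inverse(p):
        q = [0] * len(p)
        for i, pi in enumerate(p):
            q[pi] = i
        return q

    def act(p, x):
        # (p . x)_i = x_{p^{-1}(i)}
        pinv = inverse(p)
        return tuple(x[pinv[i]] for i in range(len(x)))

    def compose(p, q):
        # (p q)(i) = p(q(i))
        return [p[q[i]] for i in range(len(q))]

    sigma, tau = [1, 2, 0], [0, 2, 1]
    x = ('a', 'b', 'c')
    # Left-action law: sigma.(tau.x) == (sigma tau).x
    print(act(sigma, act(tau, x)) == act(compose(sigma, tau), x))  # True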
H: Is (im)predicativity decidable
The distinction between predicative and impredicative definitions is
important in mathematics. As a first approximation, impredicativity
means circularity. Let me give you an example of an impredicative
definition.
Let $V$ be a vector-space over a field $K$, and $S \subseteq V$ a set
of vectors. The $\operatorname{span}$ of $S$ is the intersection of all
sub-vector-spaces $V'$ of $V$ that also contain $S$.
$$
\operatorname{span}(S) = \bigcap \{V'\ |\ S \subseteq V', V'\ \text{is a sub-vector-space of}\ V\}
$$
In a set theory like ZF(C), this definition is impredicative
because $\operatorname{span}(S)$ is itself a member of $\{V'\ |\ S \subseteq V',\ V'\
\text{is a sub-vector-space of}\ V\}$. In some sense this definition
is circular. In this particular case, we can easily get around this
impredicativity, for example by defining
$$
\operatorname{span}(S) = \{\textstyle\sum_{i=1}^{n} k_i v_i\ |\ n \geq 0, k_i \in K, v_i \in S\}
$$
but it's not always that easy. For example in ZF(C) the natural numbers are often defined as follows.
$$
\mathbb{N} = \bigcap \{S \ |\ S\ \text{is an inductive set}\}
$$
where a set $S$ is inductive if it contains $0$ and is closed
under successor. Clearly, $\mathbb{N}$ is itself inductive.
Such circularities are not considered problematic in classical
mathematics, in the sense that no contradictions have ever been
derived from such impredicative definitions. Nevertheless,
impredicative definitions don't always sit well with constructive
mathematics. This leads to my question: is it always easy to see if a
definition is (im)predicative? More precisely:
Is it decidable if a formula $F$ is predicative in a theory $T$?
I'm most interested in the case where the theory $T$ is some set theory.
Note that I have not formally defined (im)predicativity. Such a
definition itself appears to be difficult, but I would be happy to
hear about answers to my question for any of the extant formal
or informal notions of (im)predicativity.
AI: 1] The usual characterization of an impredicative definition is that it defines some object (property, relation, function, etc.) by means of a quantification over some domain which, if the definition succeeds, contains that object (property, relation, function, etc.). Here quantifications will be taken to include Russellian definite descriptions, as when we define an object as the unique object such that ...
In the context of a formalized theory with typed variables, we typically implement a ban on impredicative definitions by banning definitions of things of type $t$ via formulas that involve quantifiers of type $t$. Thus, predicative second-order arithmetic is characterised by only allowing instances of the comprehension schema defining a numerical property [or set of numbers] to contain first-order quantifiers over numbers (and not second-order quantifiers over properties [sets of numbers]). And in such a context it is readily decidable by inspection whether a formula is predicative and can feature in the comprehension scheme or other kind of definition.
2] But, as the OP hints, it can be interesting to ask when an impredicative definition has a co-extensive predicative counterpart. There's surely not usually going to be a decidable routine to determine that even within a fixed formalized theory (if only because we can't usually effectively determine co-extensionality).
3] As a footnote, pace Russell, it isn't particularly helpful to think of impredicativity as a species of circularity. To use Ramsey's example, if I pick out Jane as the tallest woman in the room, that's impredicative (I pick her out by a quantification over a totality including her); but in what sense is that circular? |
H: Calculate Taylor Series function and converges
Calculate the Taylor series of the function $f(x) = 3/x$ about $a = 3$.
Where (if anywhere) does the series converge to $3/x$?
AI: To calculate the Taylor series you need to find the derivatives of the function and evaluate them at the point $a=3$:
$$\frac{d^nf}{dx^n}=3(-1)^n(n!)x^{-(n+1)}$$
The series is:
$$\sum_{n=0}^{\infty} \frac{(-1)^n}{3^n}(x-3)^n$$
The series converges for $|x-3|<3$, that is, on the interval $(0,6)$, where its sum is $3/x$; this follows from the Cauchy–Hadamard criterion or the ratio test. At the endpoints $x=0$ and $x=6$ the terms do not tend to zero, so the series diverges there.
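As a cross-check (a sketch): writing $\frac{3}{x}=\frac{1}{1+\frac{x-3}{3}}$ and expanding the geometric series reproduces $\sum_{n=0}^{\infty}\frac{(-1)^n}{3^n}(x-3)^n$ directly, and makes the condition $|x-3|<3$ immediate.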
H: Counting strings of $\{ 0,1 \}$'s of length $n$ s.t. last $k$ bits are equal
I'm trying to derive a closed formula for the number of strings of $0$'s and $1$'s of length $n$ such that the last $k \leq n$ bits are all zeros or all ones, and such that there is no other place in the string with $k$ consecutive zeros or ones.
I have tried constructing a recursive formula by "reasoning backward", but the formula is still a mystery.
AI: The question is equivalent to asking how many strings of length $n-k$ do not contain any string of $k$ consecutive zeros or ones. Half of these will end in $0$ and require appending $1^k$ to make the desired string of length $n$, and the other half will end in $1$ and require appending $0^k$. Thus, it suffices to find $f(m,k)$, the number of strings of length $m$ that do not contain any string of $k$ consecutive zeros or ones.
Such a string can be thought of as a composition of $m$ into parts of size less than $k$, the parts being the blocks of consecutive zeros or consecutive ones. Each composition is represented twice, however, once with a string starting with a block of zeros and once with a string starting with a block of ones. E.g., the composition $3+1+2$ of $6$ is represented by $000100$ and by $111011$. Thus, $f(m,k)$ is the twice the number of compositions of $m$ into parts of size less than $k$.
If we define the $\ell$-nacci numbers $a_i^{(\ell)}$ by $a_0^{(\ell)}=a_1^{(\ell)}=\ldots=a_{\ell-2}^{(\ell)}=0$, $a_{\ell-1}^{(\ell)}=1$, and
$$a_i^{(\ell)}=a_{i-1}^{(\ell)}+\ldots+a_{i-\ell}^{(\ell)}\tag{1}$$
for $i\ge\ell$, it turns out that $f(m,k)=2a_{m+k-2}^{(k-1)}$, which is easily computed from the recurrence $(1)$.
When $k=2$, for instance, $a_i^{(1)}=1$ for all $i$, and $f(m,2)=2$: the only acceptable sequences are the two alternating sequences of length $m$. When $k=3$, $a_i^{(2)}=F_i$, the $i$-th Fibonacci number, and $f(m,3)=2F_{m+1}$; in this case Binet’s formula gives a closed form, and OEIS A000073 has a really ugly closed form for the $k=4$ case, but for $k>4$ there’s nothing much better than generating functions.
For more information and references see OEIS A048887 and its references; the specific case $k=5$ is OEIS A000078. |
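A brute-force check of the count against the $\ell$-nacci formula (a sketch in Python; $f(m,k)$ and the indexing are as defined above):

    from itertools import product

    def brute(m, k):
        # Strings of length m with no k consecutive equal bits.
        ok = lambda s: all(len(set(s[i:i + k])) > 1 for i in range(m - k + 1))
        return sum(ok(s) for s in product('01', repeat=m))

    def lnacci(i, ell):
        a = [0] * (ell - 1) + [1]  # a_0, ..., a_{ell-1}
        while len(a) <= i:
            a.append(sum(a[-ell:]))
        return a[i]

    m, k = 10, 3
    print(brute(m, k), 2 * lnacci(m + k - 2, k - 1))  # both 178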
H: Given $G = (V,E)$, a planar, connected graph with cycles, Prove: $|E| \leq \frac{s}{s-2}(|V|-2)$. $s$ is the length of smallest cycle
Given $G = (V,E)$, a planar, connected graph with cycles, where the smallest simple cycle is of length $s$. Prove: $|E| \leq \frac{s}{s-2}(|V|-2)$.
The first thing I thought about was Euler's Formula where $v - e + f = 2$. But I really could not connect $v$, $e$ or $f$ to the fact that we have a cycle with minimum length $s$.
Any direction will be appreciated, thanks!
AI: This is a question from Grimaldi, Discrete and Combinatorial Mathematics, chapter 11, section 4, exercise 23(a).
You may read the last few pages of this section; if you still do not see it, I can give you the hint.
Good luck
The response to the last comment:
Each edge is in the boundary of at most 2 faces. So if you consider the sum of the lengths of all faces (regions) in $G$, the total length is at most $2e$. And since $s$ is the smallest cycle length, the sum of the lengths is at least $sr$. Finally we have $2e \geq \sum \deg(r_i) \geq sr$, where $\deg(r_i)$ is the number of edges in the boundary of the face $r_i$.
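To finish, a sketch of the remaining algebra: Euler's formula gives $r = 2 - v + e$, so $2e \geq sr = s(2 - v + e)$, hence $e(s-2) \leq s(v-2)$ and, since $s \geq 3 > 2$, we may divide to get $e \leq \frac{s}{s-2}(v-2)$, as required.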
H: Showing $\mathrm{Var}(aX+bY) = a^2\mathrm{Var}(X)+b^2\mathrm{Var}(Y)+2ab\mathrm{Cov}(X,Y)$
I am trying to prove this equation $$\mathrm{Var}(aX+bY) = a^2\mathrm{Var}(X)+b^2\mathrm{Var}(Y)+2ab\mathrm{Cov}(X,Y),$$
where $\mathrm{Cov}(X,Y):= E(XY)-EX\cdot EY$ and $E[\cdot]$ denotes the mean of a random variable. Also $\mathrm{Var}(X)$ is defined like this: $$\mathrm{Var}(X)=E(X^2)-[E(X)]^2$$ which is the variance of the random variable $X$.
I started this way:
$$\mathrm{Var}(aX+bY) = E[((aX+bY)-E(aX+bY))^2] = E[(aX-bY-aEX-bEY)^2] = $$
Please help me showing that this equals $a^2\mathrm{Var}(X)+b^2\mathrm{Var}(Y)+2ab\mathrm{Cov}(X,Y)$.
AI: The first equality in your proof is actually a valid identity, but your expansion of it has a sign slip (it should read $aX+bY-aEX-bEY$), and it is easier to use your stated definition of $\mathrm{Var}(X)$ directly. Instead it should go something along the lines of:
$$
\begin{align}
\mathrm{Var}(aX+bY)&={\rm {\rm E}}[(aX+bY)^2]-\left({\rm {\rm E}}[aX+bY]\right)^2\\
&={\rm E}[a^2X^2+b^2Y^2+2abXY]-\left(a{\rm E}[X]+b{\rm E}[Y]\right)^2\\
&=a^2{\rm E}[X^2]+b^2{\rm E}[Y^2]+2ab{\rm E}[XY]-a^2{\rm E}[X]^2-b^2{\rm E}[Y]^2-2ab{\rm E}[X]{\rm E}[Y].
\end{align}
$$
I'm sure you can take it from here. |
H: Prove that the equation $3x-\sin(2x)=0$ has only one solution
I'm trying to prove that this equation has only one solution: $$3x-\sin(2x)=0$$
I need some advice on how to do that, thanks!
AI: By intermediate value theorem it clearly has at least one solution (or just check by plugging in $x=0$). Now the derivative of $3x-\sin(2x)$ is $3-2\cos(2x)>0$, hence the function is strictly increasing, hence it can cross the x-axis only once, ergo only one solution. |
H: Matrix representation of the adjoint of an operator, the same as the complex conjugate of the transpose of that operator?
Since I'm not taking summer classes I decided to do some self learning on more advanced mathematics, and I've found myself stuck on this problem:
I have to show that for any operator $\hat{A}$ the matrix representation of the adjoint $\hat{A}\dagger$ is given as the complex conjugate of the transpose of the matrix representation of the operator $\hat{A}$.
So basically show that
$\underline{A}\dagger = (\underline{A^T})^*$
I've done many examples on the textbook using actual matrices, but I don't really know how to prove this in terms of operators, any hint or where to start, or some guidance?
Thank you
AI: Let $V,W$ be complex, finite dimenstional Hilbert spaces, $A \colon V \to W$ a linear operator. Let $(e_1, \ldots, e_n)$ and $(f_1, \ldots, f_m)$ be orthonormal bases of $V$ and $W$ respectively. If then $(\alpha_{ij})$ is the matrix representation of $A$ with respect to these bases, we have
$$ Ae_i = \sum_j \alpha_{ij}f_j, \quad 1 \le i \le n $$
Or, as $(f_j)$ is orthonormal
$$ (Ae_i, f_j) = \alpha_{ij}, \quad 1 \le i \le n, 1 \le j \le m $$
Let $(\beta_{ij})$ be the matrix representation of $A^\dagger$, we have then, as above
$$ A^\dagger f_i = \sum_i \beta_{ij}e_j $$
or $$ (A^\dagger f_i, e_j) = \beta_{ij}, \text{ each $i,j$} $$
We get, by definition of the adjoint, that
\begin{align*}
\beta_{ij} &= (A^\dagger f_i, e_j)\\
&= (f_i, Ae_j)\\
&= \overline{(Ae_j, f_i)}\\
&= \overline{\alpha_{ji}}
\end{align*}
So $(\beta_{ij})_{i,j} = (\bar \alpha_{ji})_{i,j}$, as wanted. |
H: Closed formula to count number of binary numbers of length $x$ having at least $y$ $1$ bits
I'm interested in solving a sub problem of the algorithm related question from SO How many binary numbers having given constraints .... The sub problem being, having $x \geq y$
determine how many binary numbers of length $x$ have at least $y$ $1$ bits
The immediate algorithm is to go over all $2^x$ binary numbers and count the ones having at least $y$ $1$ bits. The time complexity of such an algorithm is $O(2^x)$, or even $O(x2^x)$, since for each number one has to count the $1$s inside using a loop through all bits (actually $x/2$ on average, but this is not the point).
My target is ideally to find an $O(1)$ algorithm, i.e. a closed formula $f$ such that $\textrm{numbers} = f(x,y)$.
I started with some observations. For $x = 5$, with each x in the patterns below standing for a bit that can be either $0$ or $1$:
y = 1 y = 2 y = 3
1xxxx (2^4) 11xxx 111xx
01xxx (2^3) 101xx (...)
001xx (2^2) 011xx Σ = 1x2^2+3x2^1+6x2^0
0001x (2^1) 1001x
00001 (2^0) 0101x
0011x
Σ = 2^5 - 1 10001
01001
00101
00011
Σ = 1x2^3+2x2^2+3x2^1+4x2^0
The number of terms is $C_x^y$ and from there it is possible to improve the initial algorithm into parsing only $C_x^y$ elements instead of $2^x$, and as a developer that could make me happy, but, and this is the question,
is there a closed formula that gives directly the number of binary numbers of length $x$ having at least $y$ $1$ bits?
AI: The required value should be the following:$$\sum_{i=0}^{x-y}\binom{x}{y+i}$$
This is because you need to select $y, (y+1), ..., x$ positions from the total $x$ positions to place $1$'s, and the rest zeroes.
This expression is a tail sum of binomial coefficients (the last $x-y+1$ of them). As far as I know, there is no general closed form for such a partial sum of binomial coefficients that would give you an $O(1)$ answer.
You may find this link helpful for finding a efficient algorithm or a good bound on the sum. |
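A quick sketch in Python comparing the formula with brute-force enumeration:

    from math import comb

    def count(x, y):
        return sum(comb(x, y + i) for i in range(x - y + 1))

    x, y = 10, 4
    # Brute force: count the numbers 0 .. 2^x - 1 with at least y one-bits.
    print(count(x, y), sum(1 for n in range(2 ** x) if bin(n).count('1') >= y))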
H: Evaluate $ f(a) = \int^a_{-a}|x|dx$ at $a=2$
I want to evaluate this integral $$ f(a) = \int^a_{-a}|x|dx$$ between $a$ and $-a$, and I know that $\int^a_{-a}|x|dx$ splits into two pieces: $$\text{for } x>0: \quad \int^a_{0}x\,dx $$
$$\text{for } x<0: \quad \int^0_{-a}-x\,dx $$
After I evaluate it I get zero, so I don't know if I'm right.
What I did is :
$$x<0 \rightarrow \int^0_{-2}-xdx = \frac{-x^2}{2}|^{0}_{-2} = -2 $$
$$x>0 \rightarrow \int^2_{0}xdx = \frac{x^2}{2}|^{2}_{0} = 2 $$
Thanks!
AI: You've got the sign of the first integral wrong:
$$\int^0_{-2}-xdx = \frac{-x^2}{2}|^{0}_{-2} = 0 - \frac{-(-2)^2}{2}=2$$ |
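Putting the two pieces together: $f(2) = 2 + 2 = 4$, and in general $f(a)=\int_{-a}^{a}|x|\,dx = a^2$ for $a \geq 0$, since $|x|$ is even and the two halves are equal rather than cancelling.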
H: Approximation of an integral
I can't get over a step in my teacher's exercise.
$$I(x) = 2\int_{0}^{1} \frac{y^3 + 4y^2}{y^2+4y+5} dy = 2\int_{0}^{1} y dy -5\int_{0}^{1} \frac{2y}{y^2+4y+5} dy$$
Probably it's very easy..
Thank you :)
AI: Observe that:
$$\begin{align*}
2\int_{0}^{1} \frac{y^3 + 4y^2}{y^2+4y+5} dy
&= 2\int_{0}^{1} \frac{y(y^2+4y+5)−5y}{y^2+4y+5} dy \\
&= 2\int_{0}^{1} \left( \frac{y(y^2+4y+5)}{y^2+4y+5} - \frac{5y}{y^2+4y+5}\right) dy \\
&= 2\int_{0}^{1} \left( y - \frac{5y}{y^2+4y+5}\right) dy \\
&= 2\left(\int_{0}^{1} y~dy - \int_{0}^{1}\frac{5y}{y^2+4y+5}dy \right) \\
&= 2\int_{0}^{1} y~dy - 2\int_{0}^{1}\frac{5y}{y^2+4y+5}dy \\
&= 2\int_{0}^{1} y~dy -5\int_{0}^{1} \frac{2y}{y^2+4y+5} dy
\end{align*}$$ |
H: Questions about the concept of Structure, Model and Formal Language
When we start to define mathematical logic (specifically, propositional, first order, and second order logic) we start by defining the concept of a language. At the beginning this is done in a purely syntactical way. So far, so good.
Now, in this formalization we have been using what is called "metalogic" and "metamathematics" (we use sets, functions, recursion, the process of reasoning, etc.). Then we want to attach the meaning of "truth".
(*)In the language of propositional logic we do this by means of the truth tables and the rules of deduction. In the case of first and second order logic we use the concept of "structure" (An actual set where the funcions and relations interact) and also the rules of deduction with quantifiers. Then it comes the concept of "model" and my problem.
As I understand it, a model is a structure in which a system of axioms, expressed in a given language, is satisfied.
Question: Am I right?
But in this case we have defined the term "structure" in the metamathematical sense. So, when we say that we are building a model, we speak also in terms of metamathematics.
Now let's talk about the language of set theory. We built up this language to be sufficiently powerful to express all mathematics.
The axioms of $\mathsf{ZF}$ are then given. But then, to attach meaning to the expressions in the language, we actually need to suppose that there is a model of $\mathsf{ZF}$. It means we need to consider that there exists a structure that satisfies the axioms of $\mathsf{ZF}$ (of course, in order to be consistent).
Assuming that the axioms of $\mathsf{ZF}$ are consistent, from the language of set theory and the axioms of $\mathsf{ZF}$ we can build up all of mathematics.
Question: given that practically everything can be constructed inside the Language of Set Theory (with $\mathsf{ZF}$) and we use it as such, why do people say that it is a metatheory?
Question: if everything can be expressed by means of the Language of Set Theory (with $\mathsf{ZF}$), meaning most of the mathematics we practically use, why consider a different language, like for example the language of group theory, the language of arithmetic, etc.?
Also, when considering a new language and a system of axioms, we need to talk about a model that satisfies the axioms (in order to be consistent). But such a model is a structure, and people use the axioms of set theory ($\mathsf{ZF}$) to build it up. This makes me think of the next question.
Question: when we say that we are modeling, are we working outside in the metatheory, or are we working inside the language of set theory?
I really appreciate any comment about this topic and also any observation that helps me understand these concepts.
AI: You're mistaken about second-order logic. It is just as much a syntactical concept as first-order logic, but the logic itself is not as "nice" as first-order logic, and the idea of a second-order variable already refers, to some extent, to the idea of a set.
Question 1: You're half right. The rules of deduction are in the language; those are syntactical rules, because proofs are syntactical. But a model of a theory is an interpretation of the language in which the axioms are true.
Question 2: We refer to $\sf ZF$ as a metatheory because that is the framework in which we can develop logic (syntax and semantics together, even for "large" languages which include uncountably many symbols) and then we work on other theories inside the universe of set theory.
This is the theory of the universe; it's not the theory of calculus, of arithmetic, or of groups. It's the underlying theory. And when we argue about groups, about arithmetic, or about whatever, we argue in those relevant theories. And sometimes we make arguments about those theories, and an argument about a theory is an argument in the meta-theory (hence the prefix meta).
For example the statement: "Group theory does not prove the statement $\forall x\forall y(x*y=y*x)$" is a statement about the theory of groups, and therefore it is a statement in the meta-theory. Whether or not the meta-theory is $\sf ZF$, or something else, is irrelevant at the moment. But it's a statement about the theory of groups.
Question 3: Because the language of set theory is the meta-language. When we want to talk about the theory of fields we need two operations and two constant symbols. That's the language of fields. The language of set theory is the language of the underlying universe.
This question is the same as asking "If the CPU works with opcodes, why do we need C++, Java, Common Lisp, or Haskell?"
Yes, we can certainly express things with $\in$. But that would completely obscure any possible meaning of anything we would like to write, and you'd have to write incredibly long expressions for pretty much anything.
Question 4: If we agree that $\sf ZF$ is the meta-theory, then the arguments are essentially in the language of set theory. Of course we don't write them as formal expressions using $\in$, but we can. It's just excruciatingly long, and if you're not using a proof assistant it's also useless.
This is why we have English. |
H: Galois group $\operatorname{Gal}(f/\mathbb{Q})$ of the polynomial $f(x)=(x^2+3)(x^2-2)$
Find the Galois group $\operatorname{Gal}(f/\mathbb{Q})$ of the polynomial $f(x)=(x^2+3)(x^2-2)$.
Any explanations during the demonstration, will be appreciated. Thanks!
AI: The splitting field of $f$ is $\mathbb{Q}(\sqrt{2}, i\sqrt{3})$ and since
$$[\mathbb{Q}(\sqrt{2}, i\sqrt{3}):\mathbb{Q}]=[\mathbb{Q}(\sqrt{2}, i\sqrt{3}):\mathbb{Q}(\sqrt{2})][\mathbb{Q}(\sqrt{2}):\mathbb{Q}]=2\cdot 2=4$$
the Galois group has four elements.
Since every $\sigma\in$ Gal$(\mathbb{Q}(\sqrt{2}, i\sqrt{3})/\mathbb{Q})$ must take $\sqrt{2}$ and $i\sqrt{3}$ to another root of Irr$(\sqrt{2}, \mathbb{Q})=x^2-2$ and Irr$(i\sqrt{3}, \mathbb{Q})=x^2+3$ respectively, we see that $\sigma(\sqrt{2})=\pm \sqrt{2}$ and $\sigma(i\sqrt{3})=\pm i\sqrt{3}$. Thus every element in Gal$(\mathbb{Q}(\sqrt{2}, i\sqrt{3})/\mathbb{Q})$ has order $1$ or $2$. As the only groups of order $4$ are $\mathbb{Z}_4$ and $V_4$ and our group is not cyclic, it must be that the Galois group of $f$ over $\mathbb{Q}$ is $V_4$.
H: A property of completely separable mad families
A family of sets $\mathcal{A}\subset[\omega]^\omega$ is called almost disjoint (a.d.) iff $\forall a,b\in\mathcal{A}(a\neq b\rightarrow |a\cap b|<\omega)$ and $\mathcal{A}$ is infinite (as such families turn out to be not that interesting if they are finite).
For an a.d. family $\mathcal{A}$ let $\mathcal{I}(\mathcal{A})$ be the ideal on $\omega$ generated by $\mathcal{A}\cup\{\{n\}:\,n\in\omega\}$ and let $\mathcal{I}^+(\mathcal{A})$ be the corresponding coideal $\mathcal{P}(\omega)\setminus\mathcal{I}(\mathcal{A})$.
An a.d. family is called completely separable (or saturated) if for any $b\in\mathcal{I}^+(\mathcal{A})$ there is $a\in\mathcal{A}$ such that $a\subset b$. It is now easy to see that an infinite completely separable a.d. family must be maximal a.d. (i.e., it is not properly included in another a.d. family) as any $A\subset\omega$ witnessing nonmaximality of $\mathcal{A}$ witnesses that $\mathcal{A}$ is not completely separable.
In [1] and [2] one can read (without proof) that any completely separable a.d. family has the property that for any $b\in\mathcal{I}^+(\mathcal{A})$ there already have to be $2^{\aleph_0}$ many $a\in\mathcal{A}$ s.t. $a\subset b$.
A first try was to take an a.d. family $\mathcal{B}$ on $b$ of size $2^{\aleph_0}$ but since it is not guaranteed that its elements are elements of $\mathcal{I}^+(\mathcal{A})$ one does not get that the elements of $\mathcal{B}$ contain elements of $\mathcal{A}$.
So one way is to find an a.d. family of size $2^{\aleph_0}$ inside $[b]^\omega\cap\mathcal{I}^+(\mathcal{A})$ (which should be possible according to [2]), but I didn't know how this could be done.
Does anybody of you know a proof of this or a reference where one can find a proof?
[1] Dilip Raghavan: A model with no strongly separable almost disjoint families, Israel J. Math., vol. 189 (2012), 39–53. (obtainable from http://www.math.toronto.edu/raghavan/ where it appears as 5 in the papers section)
[2] Saharon Shelah: MAD families and SANE players (preprint obtainable from http://arxiv.org/abs/0904.0816)
AI: Choose distinct sets $A_n\in\mathcal A,A_n\subseteq b$ $(n\in\omega)$. Let $B_n$ be an infinite proper subset of $A_n\setminus(A_0\cup\dots\cup A_{n-1})$. Thus the $B_n$'s are pairwise disjoint, and for $i\in\omega$ we have $A_i\not\subseteq\bigcup\{B_n:n\in\omega\}$. Let $\mathcal N\subset[\omega]^{\omega}$ be an a.d. family with $|\mathcal N|=2^{\aleph_0}$. For each $N\in\mathcal N$, let $B_N=\bigcup\{B_n:n\in N\}$. Then $B_N\in\mathcal I^+(\mathcal A)$, since $B_N$ has infinite intersection with infinitely many elements of $\mathcal A$. For each $N\in\mathcal N$ choose $C_N\in\mathcal A$ so that $C_N\subseteq B_N$. Thus $C_N\ne A_i$ for $i\in \omega$.
I claim that the $C_N$'s are distinct. Assume for a contradiction that $C_N=C_{N'}$ for some $N,N'\in\mathcal N,$ $N\ne N'$. Then $C_N\subseteq B_N\cap B_{N'}=\bigcup\{B_i:i\in N\cap N'\}\subseteq\bigcup\{A_i:i\in N\cap N'\}$. Since $N\cap N'$ is finite, it follows that $C_N\cap A_i$ is infinite for some $i$. Since $C_N,A_i\in\mathcal A$ and $C_N\ne A_i$, this contradicts the fact that $\mathcal A$ is almost disjoint.
Edited to correct an error pointed out in a comment by the original poster. |
H: Is $\frac{\sin x^2}{\sin^2 x}$ uniformly continuous on $(0,1)$? Is my proof correct?
I'm asked to check whether $\dfrac{\sin x^2}{\sin^2 x}$ is uniformly continuous on $(0,1)$.
I can see that $\dfrac{\sin x^2}{\sin^2 x}$ is continuous on $(0,1].$ So $\displaystyle\lim_{x\to1-}\dfrac{\sin x^2}{\sin^2 x}$ is finite.
I need to check if $\displaystyle\lim_{x\to0+}\dfrac{\sin x^2}{\sin^2 x}$ is finite.
$\displaystyle\lim_{x\to0+}\dfrac{\sin x^2}{\sin^2 x}\\=\displaystyle\lim_{x\to0+}\dfrac{x\cos x^2}{\sin x\cos x}\\=\displaystyle\lim_{x\to0+}\dfrac{\cos x^2-2x^2\sin x^2}{\cos^2 x-\sin^2 x}=1$
So uniformly continuous. Am I correct?
AI: You are using that if a function is continuous on $(a,b]$ and the limit $\lim_{x\rightarrow a+}f(x)$ exists, then $f$ (or to be a little pedantic, its extension) is uniformly continuous on $[a,b]$. Can you prove this result? If you can, then you are absolutely right in whatever you have done above. |
H: What do square brackets around a polynomial mean?
EDIT: The first paragraph has been indicated as inaccurate, please see this answer.
When we have something like $\mathbb{Z}_2[x]/(x^2 + x + 1)$, we
understand that this means the set of polynomials over indeterminate
$x$ where the coefficients are drawn from $\mathbb{Z}_2 = \{0,1\}$ with
degree less than $\text{deg} (x^2 + x + 1)$.
So, in this set we can include polynomials like $x$, $x+1$ and
even $1$, but not $x^2+1$ or $2x+1$. This is okay so far with me.
But in some lecture note (Rings and fields,
Sergei Silvestrov,
Spring term 2011, Lecture 5; page 3, Example 1
and subsequent ones) I find representations like,
$\mathbb{Z}_2[x]/(x^2 + x + 1) = \{[0], [1], [x], [x + 1]\}$.
I fail to understand why the square brackets have been put around
the polynomials.
Answer to this question says that this depends on the context and
may be same as parentheses or may mean braces to represent sets.
Your help will be appreciated.
AI: Your first three lines aren't accurate: that symbol denotes a field, represented by that quotient ring, whose elements can be expressed as polynomials of degree zero or degree one in one of the roots of the polynomial $\;x^2+x+1\;$ over $\,\Bbb F_2\,$.
Thus, the elements in that "set" are not polynomials but equivalence classes (cosets) of polynomials, represented by expressions of the form $\,w, w+1,$ etc., where $\, w^2+w+1=0\pmod 2\,$, and it is precisely in order to avoid this confusion that Silvestrov uses the square brackets: $[p]$ denotes the class of the polynomial $p$...
H: What is wrong with this fake proof that $\lim\limits_{n\rightarrow \infty}\sqrt[n]{n!} = 1$?
$$\lim_{n\rightarrow \infty}\sqrt[n]{n!}=\lim_{n\rightarrow \infty}\sqrt[n]{1}*\sqrt[n]{2}\cdots\cdot\sqrt[n]{n}=1\cdot1\cdot\ldots\cdot1=1$$
I already know that this is incorrect, but I am wondering why. It probably has something to do with the fact that the multiplication in $n!$ involves an unbounded number of factors.
AI: Start by figuring out a simpler example:
$$1 = \lim_{n\to\infty} \frac n n = \lim_{n\to\infty} \frac {1+1+\ldots+1} n = \lim_{n\to\infty} \frac 1 n + \frac 1 n + \ldots + \frac 1 n = 0 + 0 + \ldots + 0 = 0$$
Indeed, you cannot exchange sum (or product) and limit if the amount of terms in the sum or product depend on the limiting variable. |
H: What's the difference between $(\mathbb Z_n,+)$ and $(\mathbb Z_n,*)$?
I noticed that $(\mathbb Z_n,+)$ and $(\mathbb Z_n,*)$ are not the same thing. For example, 2 is not invertible in $(\mathbb Z_6,*)$. However, since $\bar2+\bar4=\bar0$, it is invertible in $(\mathbb Z_6,+)$.
I found the following problems:
Am I right? If so, why don't books mention this difference, which is very important?
When we study groups, what's the standard operation of $\mathbb Z_n$ when the operation is omitted?
When we study rings, do we generally use $(\mathbb Z_n,+)$ to be the abelian group of the ring $(\mathbb Z_n,+,*)$?
If I work with $(\mathbb Z_n,+)$ I would lose these theorems which I proved for $(\mathbb Z_n,*)$?
An element $a$ in $(\mathbb Z_n,*)$ is an unit iff $a$ and $n$ are coprimes
An element $a$ in $(\mathbb Z_n,*)$ is an zero divisor iff $a$ and
$n$ aren't coprimes
I'm sure there are a lot of students with this same doubt.
I really need help.
Thanks a lot.
AI: 1) Usually, $\,(\Bbb Z_n\,,\,+)\;$ would represent an additive cyclic group, whose elements are usually represented as $\,\{\overline0\,,\,\overline 1\,\ldots\,\overline{n-1}\}\;$ or as $\,\{0,1,2,...,n-1\}\pmod n\,$.
The set $\,(\Bbb Z_n\,,\,+\,,\,\cdot)\,$ represents the ring of residues modulo $\,n\,$ , with the same addition as above and multiplication modulo $\,n\,$ .
The set of invertible elements in the above ring is usually denoted by $\,\Bbb Z_n^*\;$, and sometimes $\,(\Bbb Z_n^*,\cdot)\;$ , and it is the set of elements in $\,\{0,1,2,...,n-1\}\pmod n\,$ which are coprime with $\,n\,$ (and non-zero, to avoid problems). This is a group under the usual multiplication in the above ring.
Of course, what you prove for a set under some operation may, and may not, be true for the same set, or a subset, under a different operation. Perhaps it'll be good you tell us what things were you thinking about. |
H: What is the result of (λx.λy.x + ( λx.x+1) (x+y)) ( λz.z-4 5) 10?
Could you explain what I should do about the
λx.λy.x
part?
Thanks.
AI: The parentheses are pretty bad, here is my guess on what the expression should be in order to evaluate to a number in the end:
$(\lambda x.\lambda y.(x + ( \lambda x.x+1) (x+y)))~( (\lambda z.(z-4))~5)~10$
Then you just have to reduce everything.
Here is a step-by-step reduction:
$$\begin{array}{l}
(\lambda x.\lambda y.(x + ( \lambda x.x+1) (x+y)))~( (\lambda z.(z-4))~5)~10\\
\to (\lambda x.\lambda y.(x + (x+y+1)))~(5-4)~10\\
\to (\lambda x.\lambda y.(2x+y+1))~1~10\\
\to (\lambda y.(2*1+y+1))~10\\
\to (2*1+10+1)\\
\to 13
\end{array}$$ |
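Translated directly into Python lambdas (a sketch; Python evaluates arguments eagerly, which is fine here), the same expression reduces to the same value:

    expr = (lambda x: lambda y: x + (lambda x: x + 1)(x + y))((lambda z: z - 4)(5))(10)
    print(expr)  # 13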
H: Mean value theorem for essentially bounded functions
I have the problem with the following:
Let $f \in L^{\infty}(\mathbb{R}_+)$. Is the mean value theorem valid for $f$ in the following form?
Let $0 \leq a < b < \infty$; then
$\int_{a}^{b} |f| \ d\mu = m(b-a)$ for some $m \leq \mathrm{ess} \sup |f|$.
We can see that
$$\int_{a}^b |f| \ d\mu \leq \mathrm{ess} \sup |f|(b-a)$$
thus $f \in L^1([a,b])$. Then $\mathrm{ess} \inf |f| = - \mathrm{ess} \sup |f|$, so
$$\mathrm{ess} \inf |f|(b-a) \leq \int_{a}^b |f| \ d\mu.$$ Thus the intermidiate value theorem applied to $|f|$ shows that there exists $x \in [a,b]$ such that $|f(x)| \in [\mathrm{ess} \inf |f|,\mathrm{ess} \sup|f| ]$ and
$$ \int_{a}^b |f| \ d\mu = |f(x)| (b-a) .$$
Is it correct?
How about the following:
Let $0 \leq a < b < \infty$; does there exist a complex number $m$ such that
$\int_{a}^{b} f \ d\mu = m(b-a)$, with $|m| \leq \mathrm{ess} \sup |f|$?
I think for that we could use that $f= \mathrm{Re}f + i \mathrm{Im} f$ and then apply thesis from the first question (if it is true).
Thank you for any help.
AI: Some parts are correct, others are not. You observed correctly that $\def\esssup{\mathop{\mathrm{ess\,sup}}}\def\essinf{\mathop{\mathrm{ess\,inf}}}\def\abs#1{\left|#1\right|}$
$$
\essinf \abs f \cdot (b-a) \le \int_a^b \abs f \, dx \le \esssup \abs f \cdot (b-a) \tag 1
$$
But it does not hold in general that $\essinf \abs f = -\esssup \abs f$; note, for example, that $\abs f \ge 0$. The inequality (1) follows from
$$ \essinf \abs f \le \abs{f(x)} \le \esssup\abs f, \quad \text{a. e. $x \in [a,b]$} $$
Moreover note that $\abs f$ will in general not be continuous, so you cannot just apply the MVT to it.
To prove your theorem, divide (1) by $b-a$ and obtain
$$ \essinf \abs f \le \frac 1{(b-a)}\int_a^b \abs f \, dx \le \esssup \abs f $$
Now let $m := (b-a)^{-1}\int_a^b \abs f \, dx$, then $m \in [\essinf \abs f, \esssup \abs f]$ and
$$ \int_a^b \abs f\, dx = m(b-a).$$ |
H: Calculate the matrices of $R$ and $R\circ R$ with respect to the basis $(e_1,e_2,e_3,e_4)=(1,x,\frac{1}{2}x^2,\frac{1}{6}x^3)$
I am unsure how to calculate the basis matrices of the linear map defined below. I appreciate your help.
Let $V=\mathbb{Q}[x]_{\le3}$ be the set of polynomials over $\mathbb{Q}$ of degree at most $3$. Define $R:V \rightarrow V$ by $R(f)=xf''-5f$ where $f':=df/dx$. Prove that $R$ is linear (here the base field $K=\mathbb{Q}$). Calculate the matrices of $R$ and $R\circ R$ with respect to the basis $(e_1,e_2,e_3,e_4)=(1,x,\frac{1}{2}x^2,\frac{1}{6}x^3)$.
I proved that it's linear.
Now, $R(1) =-5$, $R(x) =-5x$, $R\left(\frac{1}{2}x^2\right) = x-\frac{5}{2}x^2$, $R\left(\frac{1}{6}x^3\right) = x^2-\frac{5}{6}x^3$.
Then the basis matrix of $R$ is as follows
$$\begin{pmatrix}
-5 & 0 & 0 & 0\\
0 & -5 & 1 & 0\\
0 & 0 & -5/2 & 1\\
0 & 0 & 0 & -5/6
\end{pmatrix}.$$
I want to find the matrix of $R \circ R$, so I thought of multiplying the matrix of $R$ by itself:
$$\begin{pmatrix}
-5 & 0 & 0 & 0\\
0 & -5 & 1 & 0\\
0 & 0 & -5/2 & 1\\
0 & 0 & 0 & -5/6
\end{pmatrix}\begin{pmatrix}
-5 & 0 & 0 & 0\\
0 & -5 & 1 & 0\\
0 & 0 & -5/2 & 1\\
0 & 0 & 0 & -5/6
\end{pmatrix} = \begin{pmatrix}
25 & 0 & 0 & 0\\
0 & 25 & -15/2 & 1\\
0 & 0 & 25/4 & -20/6\\
0 & 0 & 0 & 25/36
\end{pmatrix}.$$
I also tried of $R(R(1)) =25$, $R(R(x)) =25x$, $R\left(R\left(\frac{1}{2}x^2\right)\right) = -10x+\frac{25}{2}x^2$, $R\left(R\left(\frac{1}{6}x^3\right)\right) = -8x-5x^2-\frac{25}{6}x^3$ which results in the matrix
$$\begin{pmatrix}
25 & 0 & 0 & 0\\
0 & 25 & -10 & -8\\
0 & 0 & 25/2 & -5\\
0 & 0 & 0 & 25/6
\end{pmatrix}.$$
Which way is the correct one? Thank you for your time.
AI: The matrix of $R$ is wrong.
For example $R(e_3)=R\left(\frac{1}{2}x^2\right)=x-\frac52x^2=x-5\cdot\left(\frac12x^2\right)=e_2-5e_3$... |
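For reference, a sketch of the corrected columns: $R(e_1)=-5e_1$, $R(e_2)=-5e_2$, $R(e_3)=e_2-5e_3$, and $R(e_4)=2e_3-5e_4$ (since $R\left(\frac16x^3\right)=x^2-\frac56x^3$ and $x^2=2e_3$), giving
$$\begin{pmatrix}
-5 & 0 & 0 & 0\\
0 & -5 & 1 & 0\\
0 & 0 & -5 & 2\\
0 & 0 & 0 & -5
\end{pmatrix},
\qquad
\begin{pmatrix}
-5 & 0 & 0 & 0\\
0 & -5 & 1 & 0\\
0 & 0 & -5 & 2\\
0 & 0 & 0 & -5
\end{pmatrix}^2
=
\begin{pmatrix}
25 & 0 & 0 & 0\\
0 & 25 & -10 & 2\\
0 & 0 & 25 & -20\\
0 & 0 & 0 & 25
\end{pmatrix}.$$
With the corrected matrix, the "multiply the matrix by itself" and "apply $R$ twice" computations agree, as they must.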
H: using gamma function to simplify integration
I have to evaluate $\int_0^1 x^2 \ln(\frac1x)^3\,dx$. I used the gamma function with the substitution $t=\ln (\frac {1}{x})^3$.
In this I get to integrate from $1$ to $-\infty$ with a minus sign outside.Because of this minus sign by interchanging upper and lower limit
I get to integrate from $+\infty$ to $1$.
Since $x>0$ I took this integration from 0 to 1.
Can I make that change of $+\infty$ => 0.
The answer I got is $\Gamma(2)=1$. Is this correct? Any help is appreciated.
AI: Let me solve it from the first point. If I read it correctly you are working on $$-\int_0^1x^2\ln^3(x)dx$$ Letting $x=\text{e}^{-y}$, the integral becomes $$(-1)^{4}\int_0^{\infty}y^3\text{e}^{-3y}dy$$ Now if we set $3y=u$, the latter integral becomes $$\int_0^{\infty}\frac{u^3}{3^3}\text{e}^{-u}\frac{du}{3}=\frac{1}{3^4}\Gamma(4)=\frac{3!}{81}=\frac{2}{27}$$
H: How to test the convergence of the series $\sum_{n=1}^\infty n^{-1-1/n}$?
How to test the convergence of the series $\displaystyle\sum_{n=1}^\infty\frac{1}{n^{1+1/n}}?$
Help me. I'm clueless.
AI: We have for $a_n = \frac1n$ and $b_n = \frac{1}{n^{1+1/n}}$ that
$$
\lim_{n\to\infty}\frac{a_n}{b_n}=\lim_{n\to\infty}\frac{n^{1+1/n}}{n} = \lim_{n\to\infty}n^{1/n} = 1.
$$
Since $\sum a_n$ diverges, so does $\sum b_n$, by the limit comparison test.
H: Can you tell which are correct terms of the sum of solution of the integral $\int (x^2+1)^n dx $?
According to WolframAlpha,
$$\int (x^2+1)^n dx = x \cdot _2F_1(\frac{1}{2},-n;\frac{1}{3};-x^2)$$
and
$$_2F_1(\frac{1}{2},-n;\frac{1}{3};-x^2)= \sum_{n=0}^{\infty} \frac{1}{3}(-n)\frac{(-x^2)^n}{n!}.$$
Can you tell which are correct terms of the sum of solution of the integral $\int (x^2+1)^n dx $?
these
$$(n=0): x, (n=1): 1/3x(x^2+3), (n=2): 1/3 x(x^4+x^2), \dots$$
or (by using $_2F_1(a,b;c;z)= \sum_{n=0}^{\infty} \frac{(a)_n(b)_n}{(c)_n}\frac{z^n}{n!}$(1)), these which I got $$(n=0): 0, (n=1): 1/3x(x^2), (n=2): 1/3 x(-x^4+x^2), \dots$$
If mine are incorrect, can you compute some terms of the sum (1)?
AI: You are getting the wrong answer. The answer should be
$$ x\,{\mbox{$_2$F$_1$}\left(\frac{1}{2},-n;\,\frac{3}{2};\,-{x}^{2}\right)}. $$
Now, you won't have problems. For $n=0,1,2,3$, we get the corresponding answers
$$ x,\,
\frac{x}{3} \left( {x}^{2}+3 \right),
\, \frac{x}{15} \left( 3\,{x}^{4}+10\,{x}^{2}+15 \right),\, \,
\frac{x}{35} \left( 5\,{x}^{6}+21\,{x}^{4}+35\,{x}^{2}+35 \right). $$
Added: You should use a different index for the sum, as
$$ _2F_1(\frac{1}{2},-n;\frac{3}{2};-x^2)= \sum_{k=0}^{\infty} \frac{(1/2)_k(-n)_k}{(3/2)_k}\frac{(-x^2)^k}{k!}. $$
Now, you need this identity
$$ (a)_m = \frac{\Gamma(m+a)}{\Gamma(a)}= \frac{(m+a-1)!}{(a-1)!}. $$
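If it helps, here is a sympy sketch (sympy assumed; the terminating series is written out with rising factorials, so no special hypergeometric support is needed) checking the corrected formula for small $n$:

```python
import sympy as sp

x = sp.symbols('x')
for n in range(4):
    direct = sp.integrate((x**2 + 1)**n, x)
    # terminating 2F1(1/2, -n; 3/2; -x^2), expanded with rising factorials (a)_k
    series = sum(sp.rf(sp.Rational(1, 2), k) * sp.rf(-n, k) / sp.rf(sp.Rational(3, 2), k)
                 * (-x**2)**k / sp.factorial(k) for k in range(n + 1))
    print(n, sp.simplify(direct - x*series))   # 0 for each n
```

The difference is $0$ for every $n$ tested, matching the polynomials above. |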
H: Properties of an alternating bilinear form its coordinate matrix
I found that I lack much basic knowledge about linear algebra, so I read the wiki article about Bilinear Forms, especially this Paragraph. Proving that "Every alternating form is skew-symmetric." was quite easy, and I found a counter-example for the converse if $char(F) = 2$.
However, I am currently trying to find a proof for this:
A bilinear form is alternating if and only if its coordinate matrix is skew-symmetric and the diagonal entries are all zero
I looked at the following equations, to understand that.
$\begin{pmatrix}v_1 & v_2\end{pmatrix} \cdot \begin{pmatrix}a & b \\ c & d\end{pmatrix} \cdot \begin{pmatrix}v_1 \\ v_2\end{pmatrix} = 0$
$\begin{pmatrix}v_1 & v_2\end{pmatrix} \cdot \begin{pmatrix}a & b \\ c & d\end{pmatrix} \cdot \begin{pmatrix}w_1 \\ w_2\end{pmatrix} = - \begin{pmatrix}w_1 & w_2\end{pmatrix} \cdot \begin{pmatrix}a & b \\ c & d\end{pmatrix} \cdot \begin{pmatrix}v_1 \\ v_2\end{pmatrix}$
And they are both solvable for $a=0, d=0, c=-b$, thus the matrix $\begin{pmatrix}0 & b \\ -b & 0\end{pmatrix}$.
However, I am not any nearer to a proof. Can someone please point me into the right direction?
AI: Hint: Let $\beta$ be a bilinear form and $B$ its coordinate matrix. Applying $\beta$ to two basis vectors $e_i$, $e_j$ gives
$$ \beta(e_i, e_j) = e_i^t \cdot B \cdot e_j = B_{ij} $$
that is the $(i,j)$-entry of $B$. Now if $\beta$ is alternating, then it is skew-symmetric, hence $\beta(e_i, e_j) = -\beta(e_j, e_i)$ and $\beta(e_i, e_i) = 0$. Can you relate this to $B$, using the above?
Edit: As you noted in your comment, this gives $B = -B^t$ with zeros on the diagonal, that is, one direction. For the other direction, note that for arbitrary $x,y$, we have
$$ \beta(x,y) = x^t B y = -x^t B^t y = -y^t B x $$
and that
$$ \beta(x,x) = \sum_{i=1}^n B_{ii}x_i^2, $$
since for $B=-B^t$ the off-diagonal terms cancel in pairs.
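For intuition, a tiny numerical illustration (numpy assumed) of this converse direction over $\Bbb R$:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
B = B - B.T               # B is now skew-symmetric, with zero diagonal
x = rng.normal(size=4)
print(x @ B @ x)          # ~0 up to floating-point rounding: the form is alternating
```

Over a field of characteristic $2$ the zero-diagonal condition has to be imposed separately, as noted in the question. |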
H: Showing that the matrix transformation $T(f) = x*f'(x)+f''(x)$ is linear
I want to show that the following transformation is linear.
$T(f) = x*f'(x)+f''(x)$
I know I have to show that $T(f+g) = T(f) + T(g)$ but I don't understand what $T(f+g)$ will look like.
Is $T(f+g) = x*(f'(x)+g'(x))+(f''(x)+(g''(x))$ the correct interpretation?
AI: By the definition of $T$, you have for every function $h$ in your domain of $T$, that
$$ T(h) = x\cdot h'(x) + h''(x) $$
If now $h = f+g$, we get
$$ T(f+g) = x \cdot (f+g)'(x) + (f+g)''(x), $$
which equals your calculated result; the computation uses the linearity of differentiation, $(f+g)'=f'+g'$ and $(f+g)''=f''+g''$. For full linearity, the same argument with $h=cf$ gives $T(cf)=cT(f)$.
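A quick symbolic spot-check (assuming sympy; $f$, $g$ are arbitrary sample choices):

```python
import sympy as sp

x, c = sp.symbols('x c')

def T(h):
    return x*sp.diff(h, x) + sp.diff(h, x, 2)

f, g = sp.sin(x), x**3
print(sp.simplify(T(f + g) - (T(f) + T(g))))   # 0: additivity
print(sp.simplify(T(c*f) - c*T(f)))            # 0: homogeneity
```

Of course the spot-check is no proof; the general argument above is what establishes linearity. |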
H: A question on inverse functions
Let $f:\mathbb R \to \mathbb R$ is a strictly increasing function and $f^{-1}$ is its inverse function. It satisfies:
$f(x_1)+x_1=a$; $f^{-1}(x_2)+x_2=a$.
What is the value of $x_1+x_2$? Thanks for your help.
AI: It seems the following works. Since the function $f$ is strictly increasing, the functions $f^{-1}$ and $g(x)=f(x)+x$ are strictly increasing too. We have $a=g(x_1)=g(f^{-1}(x_2))$. Since $g$ is injective, $x_1=f^{-1}(x_2)$, and therefore $x_1+x_2=f^{-1}(x_2)+x_2=a$. |
H: If every irreducible element in $D$ is prime, then $D$ has the unique factorization property.
Suppose every irreducible element in a domain $D$ is prime.
I'm trying to prove this implication:
In an integral domain $D$, if $a=c_1c_2...c_n$ and $a=d_1d_2...d_m$
($c_i,d_i$ irreducible), then $n=m$ and up to order $c_i$ and $d_i$
are associates for every $i$.
My Solution
For each $i$, $c_i$ divides $d_1...d_m$; since $c_i$ is irreducible, hence prime, $c_i$ has to divide some $d_j$, $j\le m$. But $c_i$ and $d_j$ are irreducible, so $c_i$ and $d_j$ are associates.
Is my solution is correct so far?
how can I prove $n=m$?
Thanks in advance
AI: You're doing great, now cancel and continue. (all integral domains are cancellative)
More precisely: prove your claim by induction on $\min(m,n)$. |
H: Every function is the sum of an even function and an odd function in a unique way
It is known that every function $f(x)$ defined on the interval $(-a,a)$ can be represented as the sum of an even function and an odd function. However
How do you prove that this representation is unique?
Thanks for your help.
AI: If $f = g_1 + g_2 = h_1+h_2$ where $g_1,h_1$ are even and $g_2,h_2$ are odd then
$$
g_1 - h_1 = g_2 - h_2 \tag{1}
$$
where the left-hand side of $(1)$ is even and the right-hand side is odd, hence both sides are just $0$. Indeed, it is easy to show just from the definitions that any function which is both even and odd must be identically $0$: if $h$ is both, then $h(x)=h(-x)=-h(x)$, so $2h(x)=0$. Hence $g_1=h_1$ and $g_2=h_2$. |
H: $f'(x)=f(x)$ and $f(0)=0$ implies that $f(x)=0$ formal proof
How can I prove that if a function is such that $f'(x)=f(x)$ and also $f(0)=0$ then $f(x)=0$ for every $x$. I have an idea but it's too long, I want to know if there is a simple way to do it. Thanks!
Obviously in a formal way.
AI: An implicit assumption is that the function is defined on some open interval containing $0$. Set
$$g(x)=e^{-x}f(x)$$
and compute the derivative:
$$g'(x)=-e^{-x}f(x)+e^{-x}f'(x)=-e^{-x}f(x)+e^{-x}f(x)=0$$
so the function $g$ is constant on the interval where it's defined. Since $g(0)=e^{-0}f(0)=0$ you can conclude that $g(x)=0$ for all $x$ and therefore also $f(x)=0$ for all $x$.
Without the initial assumption, you can get different functions with that property: define, for instance,
$$f(x)=\begin{cases}
0 & \text{if $-1<x<1$}\\
e^x & \text{if $2<x<3$}
\end{cases}$$
Then $f$ satisfies the requirements, but it is not identically $0$. |
H: List the set of points of discontinuity of piecewise function
List the set of points of discontinuity of $f:(0,\infty)\to\mathbb R$, defined by
$$f(x)=\begin{cases}x-[x] & \text{if } [x] \text{ is even}\\1-x+[x] & \text{if } [x] \text{ is odd}\end{cases}$$
AI: We see that $$x\in(0,1)\implies [x]=0 \implies f(x)=x,$$ $$x\in[1,2)\implies [x]=1 \implies f(x)=2-x,$$ $$x\in[2,3)\implies [x]=2 \implies f(x)=x-2,$$ and so on. At each integer $n$ the one-sided limits agree with the value $f(n)$, so the function is continuous on all of $(0,\infty)$: the set of points of discontinuity is empty. |
H: Why $L$ is the eigenspace of $L_A$?
$A=\frac{1}{\sqrt{5}}\begin{pmatrix} 1&2\\2&-1 \end{pmatrix}$
Let $L_A$ be a reflection of $R^2$ about a line $L$ through the origin.
Then $L$ is the one dimensional eigenspace of $L_A$ corresponding to the eigenvalue 1.
I know the eigenvalues are $1$ and $-1$. But why should it be $1$?
AI: Because an eigenvalue $1$ in a direction means that vectors in that direction are left unchanged. Since your reflection fixes exactly the line $L$, that line must be the eigenspace for the eigenvalue $1$; the direction perpendicular to $L$ is flipped, which corresponds to the eigenvalue $-1$.
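A numerical check (numpy assumed) of the eigenstructure:

```python
import numpy as np

A = np.array([[1, 2], [2, -1]]) / np.sqrt(5)
vals, vecs = np.linalg.eigh(A)   # A is symmetric, so eigh applies
print(vals)                      # [-1.  1.]
print(vecs[:, 1])                # spans the fixed line L (eigenvalue +1)
```

The eigenvector for $-1$ is perpendicular to $L$ and gets flipped by the reflection. |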
H: Numbers of students registered for various courses
I have a probability problem that I’m struggling with. Any help would be appreciated.
Here is the problem:
A class has 28 students. In this class, each student is registered in at most 2 language courses chosen among English, German and Spanish. The number of students registered in each of these 3 language courses is 25 for English, 18 for German and 13 for Spanish.
1) There are many students registered for both English and Spanish. True or False ?
2) All Students registered in German are registered in “English” ? True ? False ? We cannot tell ?
I cannot figure out how to find out this.
Can you please help on this?
Regards,
AI: Here's how to arrive at the solution:
There are 28 students in the class, 25 of which are in the English course. So how many students are not in the English course?
There are 13 students in the Spanish course. Given the answer to the previous question, how many at least have to also be in the English course? (This gives you the answer to the first question)
Given the last answer, what is the minimum number of people attending the English course that also attend the Spanish course?
Given that no one has all three courses, how many people in the English course can at most attend the German course?
How does that number compare to the number of people attending the German course? (This answers the second question.)
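If you want to verify the conclusions by brute force, here is a short Python sketch (variable names are mine) that enumerates the seven possible registration patterns; since no student takes all three courses, the whole class is described by these counts:

```python
from itertools import product

solutions = []
for eg, es, gs in product(range(29), repeat=3):      # students taking two courses
    e = 25 - eg - es                                 # English only
    g = 18 - eg - gs                                 # German only
    s = 13 - es - gs                                 # Spanish only
    none = 28 - (eg + es + gs + e + g + s)           # no course at all
    if min(e, g, s, none) >= 0:
        solutions.append((e, g, s, eg, es, gs, none))

print(solutions)   # [(0, 0, 0, 15, 10, 3, 0)]: the split is completely forced
```

So exactly $10$ students take both English and Spanish (question 1 is true), and $3$ of the $18$ German students are not in English (question 2 is false). |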
H: How to simplify $a^n - b^n$?
How to simplify $a^n - b^n$?
If it would be $(a+b)^n$, then I could use the binomial theorem, but it's a bit different, and I have no idea how to solve it.
Thanks in advance.
AI: If you are looking for this???
$$
a^n-b^n=(a-b)\Big(\sum_{i=0}^{n-1}a^{n-1-i}b^i\Big)
$$
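For concrete exponents, sympy (assumed available) reproduces the pattern:

```python
import sympy as sp

a, b = sp.symbols('a b')
for n in range(2, 6):
    print(n, sp.factor(a**n - b**n))
# n=2: (a - b)*(a + b)
# n=3: (a - b)*(a**2 + a*b + b**2)
```

Note sympy gives the full factorization over the rationals, so for composite $n$ the cofactor $\sum_{i=0}^{n-1} a^{n-1-i}b^i$ splits further, but $(a-b)$ always splits off. |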
H: Is this series convergent? $\sum_{i=1}^{\infty} \frac{(\log n)^2}{n^2}$
$\sum_{i=1}^{\infty} \frac{(\log n)^2}{n^2}$
I guess it is convergent, so I apply comparsion test for this.
$\frac{\log^2 n}{n^2} < \frac{n^2}{n^2} = 1$
So it is bounded by 1 and hence it is convergent.
Can I do it this way? It kind of makes sense, but I have never dealt with a constant bound in this case.
Usually we bound using another term that depends on $n$?
AI: You can't do it that way, as Gerry mentioned: a bounded sequence of terms does not make the series converge. However, you know that $\log^2n\leq\sqrt n$ for $n$ big enough, and that's why for big $n$ you have $\frac{(\log n)^2}{n^2}\leq \frac{\sqrt n}{n^2}=\frac{1}{n\sqrt{n}}$. The latter series converges, being a $p$-series with $p=\frac32>1$. |
H: A question on a periodic function
Let $f(x)$ be a bounded real function on $\mathbb R$ and for any $x \in \mathbb R$
$$
f(x+\frac{13} {42})+f(x)=f(x+\frac16)+f(x+\frac17) \tag1.
$$
What is the fastest way to compute the period of the function?
Thanks for your help.
AI: Without any extra assumptions on $f$, you can take this example: Let $$f(x) = \begin{cases} 1 & x \in \mathbb{Q} \\ 0 & x \notin \mathbb{Q}\end{cases}$$
This function satisfies your equation and is periodic with period $r$ for any positive rational number $r$. (And has no smallest period.) |
H: How to prove two subspaces are complementary
To give some context, I'm continuing my question here.
Let $U$ be a vector space over a field $F$ and $p, q: U \rightarrow U$ linear maps. Assume $p+q = \text{id}_U$ and $pq=0$. Let $K=\ker(p)$ and $L=\ker(q)$. From the previous question, it is proven that $p^2=p$, $qp=0$ and $K=\text{im}(q)$.
Now I want to prove that $U=K \oplus L$.
By definition, I know that if they are complementary, then $K \cap L = \left\{0 \right\}$ and $K + L =U$. Also, from my lecture notes it mentions a certain proposition that when applied to this question, I can say that subspaces $K$ and $L$ of a vector space $U$ are complementary if and only if each vector $u \in U$ can be written uniquely as $u = k + l$, where $k \in K$ and $l \in L$.
How do I go about proving this? It's hard for me to know what's going on. I would appreciate it if someone could help me draw a diagram representing the linear transformations of $p$ and $q$ as it is easier for me to see. Thank you for your time.
AI: $\Rightarrow$
Let $u\in U$. Since $U=K+L$, we can write $u=k+l$ where $k\in K$ and $l\in L$. For uniqueness, suppose that $u=k'+l'$; then
$$k-k'=l'-l\in K\cap L=\{0\}$$
hence $k=k'$ and $l=l'$.
$\Leftarrow$
Let $u\in U$; then there are $k\in K$ and $l\in L$ s.t. $u=k+l$, so $U=K+L$. Moreover, if $x\in K\cap L$ then
$$x=x+0=0+x$$
and by uniqueness $x=0$ so
$$K\cap L=\{0\}$$ |
H: Representing a real valued function as a sum of odd and even functions
With $f(x)$ being a real valued function we can write it as a sum of an odd function $m(x)$ and an even function $n(x)$:
$f(x)=m(x)+n(x)$
Write an equation for $f(-x)$ in terms of $m(x)$ and $n(x)$:
My attempt using the properties - even function if: $f(x)=f(-x)$ and odd function if: $-f(x)=f(-x)$
$f(x)=m(x)+n(x) \implies f(-x)=m(-x)+n(-x) \implies f(-x)= -m(x)+n(x) \implies f(-x)=n(x)-m(x)$
I think that is correct but then I need to find equations for both $m(x)$ and $n(x)$ in terms of $f(x)$ and $f(-x)$ so a suggestion on how to tackle that would be great.
AI: $$f(x)=\underbrace{\frac{1}{2}\left(f(x)+f(-x)\right)}_{=\,n(x),\ \text{even}}+\underbrace{\frac{1}{2}\left(f(x)-f(-x)\right)}_{=\,m(x),\ \text{odd}}$$ so $n(x)=\frac{f(x)+f(-x)}{2}$ and $m(x)=\frac{f(x)-f(-x)}{2}$. |
H: How to integrate a function of $y$ over a polygon
I am given the coordinates of the vertices of a polygon and I need to integrate a function $f$ (shown in the picture) to solve my problem. How can I do it? Btw, I will use it in software which I am developing as a term project, so I am looking for a programmable solution.
AI: You could make use of Green's theorem. What is interesting is that the function you are integrating is a function of $y$ only. So when you examine the form of Green's theorem
$$\iint_D \left ( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right ) dx \, dy = \oint_{\partial D} (P dx + Q dy)$$
where $D$ is the polygon, then clearly $\partial Q/\partial x = 0$. Clearly we are left with
$$- \frac{\partial P}{\partial y} = f(y)$$
so that, within a constant
$$P(y) = -a \frac{y^2}{b} + a \frac{y^3}{3 b^2}$$
We still do not know $Q$ except that it too must be a function of $y$. It turns out that this is not important because we are integrating about a closed loop, so ultimately its contribution is zero; thus, we may as well set $Q=0$. We are then left with the following integral:
$$ \oint_{\partial D} P(y) dx$$
as the integral sought. To evaluate, simply parametrize each segment of the polygon as a straight line, say $x(t)=x_1+t(x_2-x_1)$, $y(t)=y_1+t(y_2-y_1)$ for $t\in[0,1]$ (avoiding the letters $a,b$, which are already taken by $f$), and integrate accordingly. The integral will be zero along vertical segments, where $dx=0$.
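Since you asked for a programmable solution, here is a minimal Python sketch (scipy assumed; the vertex list and the constants $a,b$ are placeholders you would substitute):

```python
from scipy.integrate import quad

def polygon_integral(vertices, P):
    """Integrate f over the polygon as the line integral of P(y) dx.

    vertices: (x, y) pairs in counterclockwise order; P satisfies -dP/dy = f(y).
    """
    total = 0.0
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        if x1 == x2:
            continue                          # vertical segment: dx = 0
        # x(t) = x1 + t*(x2 - x1), y(t) = y1 + t*(y2 - y1), t in [0, 1]
        val, _ = quad(lambda t: P(y1 + t*(y2 - y1)), 0, 1)
        total += (x2 - x1) * val
    return total

a, b = 1.0, 2.0                               # placeholder constants of f
P = lambda y: -a*y**2/b + a*y**3/(3*b**2)
print(polygon_integral([(0, 0), (1, 0), (1, 1), (0, 1)], P))   # test on a unit square
```

With counterclockwise orientation this returns $\iint_D f\,dA$; a clockwise vertex list flips the sign. |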
H: Integral inequality, $f$ continuous, increasing function
Let $f$ be a continuous, increasing function on $[a,b]$, $c$ is the middle of $[a,b]$.
Prove that $\frac{f(a)+f(c)}{2} \le \frac{1}{b-a} \cdot \int _a ^b f(x)dx \le \frac{f(b)+f(c)}{2} $ .
Could you help me with that?
I thought I might use the intermediate value theorem, but I didn't come up with anything.
AI: We have
$$\int_a^b f(x)dx=\int_a^c f(x)dx+\int_c^b f(x)dx$$
and since $f$ is increasing function then
$$\int_a^cf(a)dx=\frac{f(a)(b-a)}{2}\leq\int_a^cf(x)dx\leq\int_a^cf(c)dx=\frac{f(c)(b-a)}{2}$$
and
$$\int_c^bf(c)dx=\frac{f(c)(b-a)}{2}\leq\int_c^bf(x)dx\leq\int_c^bf(b)dx=\frac{f(b)(b-a)}{2}$$
Now add the two inequalities and divide by $b-a$. |
H: Example of a $\kappa$-long sequence of disjoint club subsets of regular cardinal $\kappa$
I'm self-studying set theory and got stuck on this exercise:
Let $\kappa$ be a regular cardinal. Give an example of a sequence $\langle C_\alpha\mid\alpha<\kappa\rangle$ such that $C_\alpha$ is club in $\kappa$ for every $\alpha<\kappa$, but
$$\bigcap\{C_\alpha\mid\alpha<\kappa\}=\emptyset.$$
To make some progress at all, I've tried specific cases. For $\kappa=\omega$, I define $f:\omega\to\omega$ as taking $n$ to the $n$'th prime, and then define
$$C_n:=\{f(n)k\mid k<\omega\},$$
which is clearly unbounded ($|C_n|=\aleph_0$) and closed (every $n\cap C_n$ will be finite, so $\sup(n\cap C_n)=\max(n\cap C_n)\in C_n$). Furthermore due to the $f(n)$'s being primes, we have
$$\bigcap\{C_n\mid n<\omega\}=\emptyset.$$
I'm trying to do the case with $\kappa=2^{\aleph_0}$ now, where I'm trying to exploit that $|\mathbb{R}|=2^{\aleph_0}$ and $|[0,1]|=2^{\aleph_0}$, so if we set $C_n:=[n,n+1]$ then the $C_n$'s are clubs (unbounded due to cardinality and closed due to closed intervals preserve limits) and disjoint. But I can only construct countably many such $C_n$'s, so it doesn't satisfy the example of needing $2^{\aleph_0}$ many disjoint clubs.
If you guys would be so kind as to hint me in the right direction, whether it be about the $2^{\aleph_0}$ case, the general case, or about how I might generalize my methods from my special cases to the general, I would appreciate it a great deal.
Thanks.
AI: First of all, when talking about clubs we generally require that $\kappa$ has uncountable cofinality. There is a problem with clubs for cardinals whose cofinality is $\omega$, because one unbounded sequence without limit points is not really a club (but still closed and unbounded). It's true that the statement holds in the countable case, but its solution doesn't necessarily generalize. It's better to start with the uncountable case, and deduce from it the countable case (and seeing how you solved the countable case, you only need to solve the uncountable case anyway).
Secondly, you can't use $\Bbb R$. It's not a well-order. The idea is to use the order topology of the cardinal, not of an ordered set of the same cardinality. If you note, the real numbers is an ordered set whose cofinality is $\omega$, as the natural numbers is a cofinal sequence in that order. Furthermore, $[0,1]$ is not unbounded in $\Bbb R$ as it is very much bounded by $0$ and by $1$.
Thirdly, $2^{\aleph_0}$ can be singular. It is consistent, for example that $2^{\aleph_0}=\aleph_{\omega_1}$.
Fourthly, clubs cannot be disjoint. The whole idea is that clubs represent "almost everything", and when you intersect two "almost everything"s, you should get "almost everything" again. The exercise asks you to show that it is possible to have a sequence of $\kappa$ clubs such that the intersection of all of them is empty.
Finally, to hint you for the solution, think about end-segments of $\kappa$. (with its order as an ordinal!) |
H: How to check if $x^TCx\geq0$?
I have the following $3\times3$ block matrix $C$, where each block is a square matrix.
$$
C = \begin{bmatrix}
0 & A & B \\
0 & A+K_1 & B \\
0 & A & B+K_2
\end{bmatrix},
$$
where each $K_i$ is a square diagonal matrix (I have freedom in choosing its elements). I want to make sure that $x^TCx \geq 0$ by choosing appropriate $K_i$.
EDIT
The answer is no
AI: Unless $A=B=0$, this is an impossible mission.
Let $x^\top =(u^\top ,v^\top ,w^\top )$, where the partitioning of $x$ conforms to the partitioning of $C$. If $x^\top Cx\ge0$ for all $x$, then in particular, this is true for $w=0$, i.e. $x^\top Cx=u^\top Av + v^\top (A+K_1)v\ge0$. However, if $A\ne0$, then for every $v\notin\ker(A)$, we can always find a vector $u$ (e.g. $u=-kAv$ for a large positive $k$) such that $u^\top Av$ and in turn $x^\top Cx$ are as negative as possible.
Put it another way, $x^\top Cx\ge0$ for all $x$ if and only if $P=C+C^\top $ is positive semidefinite. For a positive semidefinite matrix $P$, if it has a zero on its diagonal, the row and column containing this zero diagonal entry must also be zero, otherwise $P$ will contain a $2\times2$ submatrix that has negative determinant. Now the $(1,1)$-th subblock of $C+C^\top $ is zero, but the $(1,2)$-th and $(1,3)$-th subblocks are $A$ and $B$. So, $C+C^\top $ cannot be positive semidefinite when $A\ne0$ or $B\ne0$. |
H: Finding a basis with Change of Basis
Find the coordinate vector for v relative to the basis S = {v1, v2, v3} for $R^3$.
$$v = (2,-1,3);$$
$$v1 = (1,0,0); v2 = (2,2,0); v3 = (3,3,3);$$
So I did, and I got the coordinate vector $(v)_S = (3, -2, 1)$, which I believe is correct. So now I am at the next question.
Consider the coordinate vector:
$$ [w]s =
\left[ \begin{array}{c}
6 \\
-1 \\
4
\end{array} \right]$$
Find w if S is the basis of the first question.
I have no idea how to do this as my textbook is not accurate. Can someone please help me.
AI: Your answer to the first part is correct.
The second question is essentially the reverse process. If you know what the coordinate vector for w is relative to the basis S, can you "unpack" that coordinate vector to recover the original vector w?
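As a hint in code form (numpy assumed), the unpacking is just a matrix-vector product with the basis vectors as columns:

```python
import numpy as np

S = np.array([[1, 2, 3],
              [0, 2, 3],
              [0, 0, 3]])          # columns are v1, v2, v3
w_coords = np.array([6, -1, 4])    # [w]_S
print(S @ w_coords)                # w in standard coordinates
```

In other words, $w = 6v_1 - v_2 + 4v_3$. |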
H: What happens outside radius of convergence
A real power series $\sum_{n=0}^\infty a_n z^n$ has radius of convergence $R$. I am able to prove that for any real number $r>R$, the sequence $|a_n|r^n$ must be unbounded. Must it also tend to $\infty$? Please give me some hints, thanks.
AI: Let $a_n=\frac{1}{n!}$ for $n$ even, and let $a_n=1$ for $n$ odd.
Then the radius of convergence of the series $\sum_{n=0}^\infty a_nz^n$ is $1$, but for fixed $r\gt 1$ it is not the case that $|a_n|r^n\to\infty$. |
H: Give an equation of the plane parallel to $3x-12y+4z=0$ and tangent to the surface $x^2+y^2+z^2=676$
Give an equation of the plane parallel to $3x-12y+4z=0$ and tangent to the surface $x^2+y^2+z^2=676$
What I tried:
the normal vector is:
$$
[3,-12, 4]
$$
From the equation:
$$
[2x_0, 2y_0, 2z_0]
$$
It seems, that the tangent point is:
$$
P\left(\frac{3}{2}, -6, 2\right)
$$
but it's not and I don't know why.
Regards.
AI: $P\left(\frac{3}{2}, -6, 2\right)$ is not on the surface.
Note that the gradient is not necessarily equal to the normal vector; it is just proportional to it. Solve
$$[2x_0, 2y_0, 2z_0]= \lambda [3,-12, 4]$$
and check for which values of $\lambda$ the solution is on your surface.
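If you want to automate the check, a sympy sketch (sympy assumed):

```python
import sympy as sp

x0, y0, z0, lam = sp.symbols('x0 y0 z0 lambda')
eqs = [2*x0 - 3*lam, 2*y0 + 12*lam, 2*z0 - 4*lam,
       x0**2 + y0**2 + z0**2 - 676]
for sol in sp.solve(eqs, [x0, y0, z0, lam], dict=True):
    print(sol)   # the two tangent points, one per side of the sphere
```

Each tangent point $(x_0,y_0,z_0)$ then gives a plane $3x-12y+4z=3x_0-12y_0+4z_0$. |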
H: Given $\int _0 ^{+ \infty} \frac{1}{(1+x)^a} =1, \ \ a =?$
Given $\int_0^{+ \infty} \frac{dx}{(1+x)^a} =1$, what is the value of $a$?
I know that $\int_0^{+ \infty} \frac{dx}{(1+x)^2} =1$.
Are there any other solutions?
Could you help me?
AI: Possible solution: $$\int_0^{+ \infty} \frac{dx}{(1+x)^a} =1$$
$$\therefore \int_0^{+ \infty} (1+x)^{-a}\,dx =1$$
The integral converges only when $a>1$, and then
$$\left.\frac {(1+x)^{1-a}}{1-a}\right|_0^{+ \infty} =1$$
$$\therefore 0-\frac{1}{1-a}=1 \qquad\text{(since $(1+x)^{1-a}\to0$ as $x\to\infty$ when $a>1$)}$$
$$\therefore \frac{1}{a-1}=1$$
Your answer: $$\therefore a=2$$
Since $\frac{1}{a-1}=1$ has exactly one solution, there are no other values of $a$.
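A few numerical spot-checks with sympy (assumed available) illustrate that $a=2$ is the only value that works:

```python
import sympy as sp

x = sp.symbols('x')
for a in [sp.Rational(3, 2), 2, 3]:
    print(a, sp.integrate((1 + x)**(-a), (x, 0, sp.oo)))
# prints 2, 1, 1/2 -- consistent with the closed form 1/(a - 1)
```

Since $\frac{1}{a-1}$ is strictly decreasing for $a>1$, it hits $1$ only at $a=2$. |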
H: Proof that stochastic process on infinite graph ends in finite step.
Infinite Graph
Let $G$ be an infinite graph that is constructed this way: start with two unconnected nodes $v_1$ and $u_1$. We call this "level 1".
Create two more unconnected nodes $v_2$ and $u_2$. Connect $v_1$ to both of them with directed edges pointing from $v_1$ to $v_2$, and from $v_1$ to $u_2$ respectively. Then, connect $u_1$ to both of them using directed edges in the same way. This is "level 2".
Repeat this process. At level $n$, nodes $v_n$ and $u_n$ are both connected to each of the next two nodes $v_{n+1}$ and $u_{n+1}$. This results in an infinite connected graph.
Stochastic Process
We define a discrete time process heuristically on this graph. $v_1$ and $u_1$ are initially "infected". This infection only last for one time period. We start at $t = 0$.
At each time step, infected nodes have an independent probability $p$ of passing the infection to their neighbors (infected node must have a directed edge to the target for the target to be a neighbor). If this happens, the neighbor is infected for one time period.
E.g. $v_1$ is connected to $v_2$ and $u_2$. $v_1$ is infected initially. At time $t=0$, $v_1$ has a probability $p$ of infecting its neighbors $v_2$ and $u_2$. Suppose it successfully infects $v_2$, and $u_2$ is never infected by any of its predecessor. At $t=1$, $v_1$ and $u_1$ stops being infectious. And $v_2$ is infectious.
This process is repeated for each level. If a node is infected twice, it is still just "infected", there is no special meaning to a double infection.
Conjecture
This process ends with probability $1$ after a finite number of steps.
How do I prove this? My attempt: The probability that the infection on one level is passed on to the next level is at most $q = 1 - (1-p)^4$, where $(1-p)^4$ is the probability that all $4$ attempts to infect fail. For the infection to reach the $k$th level, the probability is then at most $q^k$, and this tends to zero as $k$ tends to infinity?
AI: If you direct the edges then the problem becomes trivial.
Assume inductively that at time $i$ the only nodes that might be infected are $u_i$ and $v_i$.
Then at time $i+1$ nodes $u_{i+1}$ and $v_{i+1}$ may become infected by $u_i$ and $v_i$, but $u_i$ and $v_i$ become cured, and there is no way of reinfecting them from below.
Thus your argument is correct, and the probability that the process is alive at time $i$ is at most $q^i$.
Now, for any natural number $n$, if the process does not terminate after a finite number of levels, then the process must reach level $n$.
Therefore the probability that the process does not terminate is less than or equal to the probability that the process reaches level $n$.
So if $A$ is the event that the process survives indefinitely we have
$$0\leq\mathbb P(A) \leq q^n$$ for every $n\in\mathbb N$. Therefore we must have $\mathbb P(A) = 0$.
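For reassurance, a Monte Carlo sketch (numpy assumed; the parameters are arbitrary) that tracks only the current front $(u_i, v_i)$:

```python
import numpy as np

def survives(p, levels, rng):
    u = v = True                              # u_1, v_1 start infected
    for _ in range(levels):
        u_next = (u and rng.random() < p) or (v and rng.random() < p)
        v_next = (u and rng.random() < p) or (v and rng.random() < p)
        u, v = u_next, v_next
        if not (u or v):
            return False                      # the infection has died out
    return True

rng = np.random.default_rng(1)
p, levels, trials = 0.4, 50, 10_000
alive = sum(survives(p, levels, rng) for _ in range(trials))
print(alive / trials)   # small, and it shrinks as `levels` grows
```

The empirical survival probability decays geometrically in the number of levels, matching the bound $q^n$. |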