H: Definition of notation $\mathbb Z_n$ What does the notation $\mathbb Z_n$ mean, where $n$ is an integer? I have only seen $n$ being a positive integer up to now. Some examples are $\mathbb Z_2$ or $\mathbb Z_3$. This is the context: How to prove $x^{2}+x=1$ has a solution in $\mathbb{Z}_{p}$ if and only if $p=5$ or $p\equiv \pm1\bmod 5$ AI: $\mathbb{Z}_n$ is another (shorter) name for $\mathbb{Z}/n\mathbb{Z}$, the ring of residue classes modulo $n$. A residue class modulo $n$ is the set of all integers which leave the same remainder when divided by $n$. There are exactly $n$ residue classes, corresponding to the $n$ remainders on division by $n$, $0$ to $n-1$. The key point is that the remainder of $i+j$ on division by $n$ is determined by the individual remainders alone (add them and reduce modulo $n$), and the analogue is true for the product. So no matter which members of two given residue classes you choose, the sum will always be in the same residue class, as will be the product. The notation only makes sense if $n$ is an integer $\ge 2$. If $n$ happens to be prime, $\mathbb{Z}_n$ is a field; that's why those are the most interesting cases. Strictly speaking you could also write $\mathbb{Z}_1$, but that would only contain one element.
H: A question on Pixley-Roy topology Let $R$ be the real line and let $F[R]$ be $\{x\subset R: x\text{ is finite}\}$ with the Pixley-Roy topology. The definition of the Pixley-Roy topology is this: Basic neighborhoods of $F\in F[R]$ are the sets $$[F,V]=\{H\in F[R]; F\subseteq H\subseteq V\}$$ for open sets $V\supseteq F$, see e.g. here. Is it submetrizable? Thanks for your help:) AI: Let $\mathscr{F}$ be the Pixley-Roy space in question; then $\mathscr{F}$ is submetrizable. If $U\subseteq\Bbb R$ is open, let $U^+=\{F\in\mathscr{F}:F\cap U\ne\varnothing\}$ and $U^-=\{F\in\mathscr{F}:F\subseteq U\}$. If $F\in U^-$, then $F\in[F,U]\subseteq U^-$, and if $F\in U^+$, then $F\in[F\cap U,\Bbb R]\subseteq U^+$, so $U^-$ and $U^+$ are open in $\mathscr{F}$. Let $$\mathscr{S}=\{U^-:U\text{ is open in }\Bbb R\}\cup\{U^+:U\text{ is open in }\Bbb R\}\;;$$ clearly $\mathscr{S}$ is a subbase for a topology $\tau$ coarser than the Pixley-Roy topology on $\mathscr{F}$. In fact $\tau$ is easily seen to be the Vietoris topology on $\mathscr{F}$. It’s well-known that if $X$ is metrizable, the Vietoris topology on the space $\mathscr{K}(X)$ of non-empty compact subsets of $X$ is metrizable; a proof can be found in this PDF. $\langle\mathscr{F},\tau\rangle$ is a subspace of $\mathscr{K}(\Bbb R)$ with the Vietoris topology, so $\langle\mathscr{F},\tau\rangle$ is metrizable, and $\mathscr{F}$ is submetrizable.
H: The probability of an account being chosen Current: $140$ 1-30 days past due: $80$ 31-60 days past due: $40$ 61-90 days past due: $25$ Sent for collection: $15$ What is the probability that $2$ current accounts and $2$ 1-30 days past due accounts will be chosen? Using multiplication rule: $$\frac{140}{300}\frac{139}{299}\frac{80}{298}\frac{79}{297} = 0.0155$$ Using combination method: $$\frac{\binom{80}{2}\binom{140}{2}}{\binom{300}{4}} = 0.093$$ Although the two methods are similar in meaning, the results are not the same. What is wrong, and which one is the correct one? AI: When using the multiplicative rule, you've done it in the order current, current, 1-30 days past due, 1-30 days past due, but we don't actually care about the order. There are $\binom{4}{2}=6$ orderings we could have had for these choices: AABB, BBAA, ABBA, BAAB, BABA, ABAB. Hence the value should be multiplied by six, which then agrees with your other method.
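As a numerical sanity check of the answer above (a short Python sketch added for illustration, using only the standard library), the factor of $\binom{4}{2}=6$ exactly reconciles the two methods:

```python
from fractions import Fraction
from math import comb

# Multiplication rule for one fixed order: current, current, past-due, past-due
ordered = Fraction(140, 300) * Fraction(139, 299) * Fraction(80, 298) * Fraction(79, 297)

# Combination method: choose 2 of the 140 current and 2 of the 80 past-due accounts
combinatorial = Fraction(comb(140, 2) * comb(80, 2), comb(300, 4))

assert ordered * comb(4, 2) == combinatorial   # 6 orderings of AABB
print(float(combinatorial))                    # ~0.0930
```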
H: Trigonometry - Addition Theorem Finding Another Trig function Using the expansion of $\sin(A + B)$, prove that $\tan 75^\circ = 2 + \sqrt 3$ AI: Remember $\tan(\theta)=\frac{\sin(\theta)}{\cos(\theta)}=\frac{\sin(\theta)}{\sin(90^\circ-\theta)}$, so now we've expressed $\tan$ purely in terms of $\sin$. Next you need to think of angles whose sines you know that add (or subtract) to give the desired angle.
H: Simple question: Finding the number of arrangements $3$ math books, $5$ English books, $4$ science books and a dictionary are to be placed on a student's shelf so that the books of each subject remain together. In how many different ways can the books be arranged? $$3!\space5!\space4!\space4!$$ $3!$ ways to arrange the math books, $5!$ for English, $4!$ for science, and $4!$ ways to arrange all these different subject blocks around. In how many of these will the dictionary be next to the maths books? I'm not quite sure how to do the second question. Can someone please illustrate how I could find out the number of arrangements in this case? A photo that illustrates this would be nice; I can't quite visualize it too well. AI: For the placement of subjects, there are $4$ slots that each subject can be placed in. With the added constraint that the dictionary and math books be next to each other, the math books and dictionary will be considered together as a single "unit". In other words, there will be a math/dictionary slot, which along with science and English makes for a total of $3$ slots rather than $4$. In addition, there are also 2 possible permutations among the math books and the dictionary, so the total number is: $$3!\space5!\space4!\space3!\space2!$$
H: Find peak output value using transfer function I have a filter with Transfer function $H(z)=(1-0.5z^{-1})(1+0.5z^{-1})$ designed for a sampling rate of 800 samples/s. How to find peak output if a sine of 200Hz and amplitude 4 is applied as input? AI: Your input signal is $$x(n)=A\sin n\theta_0$$ with $A=4$ and $\theta_0=2\pi f/f_s=2\pi\cdot 200/800=\pi/2$ The frequency response of your system is $H(z)$ evaluated on the unit circle, i.e. for $z=e^{j\theta}$: $$H(e^{j\theta})=|H(e^{j\theta})|e^{j\phi(\theta)}$$ where $\phi(\theta)$ is the phase response of $H(e^{j\theta})$. The output signal $y(n)$ is then given by $$y(n)=|H(e^{j\theta_0})|A\sin (n\theta_0+\phi(\theta_0))$$ So the peak amplitude of the output signal is $|H(e^{j\theta_0})|A$.
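To make the recipe above concrete, here is a small numerical check (an illustrative numpy sketch; the variable names are mine, not from the original answer). Since $H(z)=(1-0.5z^{-1})(1+0.5z^{-1})=1-0.25z^{-2}$ and $\theta_0=\pi/2$, we get $|H(e^{j\pi/2})|=1.25$ and a peak output of $1.25\cdot 4=5$:

```python
import numpy as np

A, f, fs = 4.0, 200.0, 800.0
theta0 = 2 * np.pi * f / fs        # = pi/2 for a 200 Hz tone at 800 samples/s

# Evaluate H(z) = (1 - 0.5 z^-1)(1 + 0.5 z^-1) on the unit circle at theta0
z = np.exp(1j * theta0)
H = (1 - 0.5 / z) * (1 + 0.5 / z)

print(abs(H) * A)                  # 5.0, the peak output amplitude
```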
H: Recursive function into non-recursive I have to express $a_n$ in terms of $n$. How do I convert this recursive function into a non-recursive one? Is there any methodology to follow in order to do the same with any recursively defined function? Thanks. $$a_n = \begin{cases} 0 & \text{if }n=1\\ a_{n-1}+n-1 & \text{if }n \ge 2 \end{cases} $$ So the results would be: $$ a_1 = 0 \\ a_2 = 1 \\ a_3 = 3 \\ a_4 = 6 \\ a_5 = 10 \\ ... \\ $$ AI: OK, with the corrected recursion formula we get: $$a_1=0\\ a_2=0+(2-1)=0+1\\ a_3=0+(2-1)+(3-1)=0+1+2\\ a_4=0+(2-1)+(3-1)+(4-1)=0+1+2+3\\\vdots\\ a_n=\sum_{k=0}^{n-1}k=\frac{n(n-1)}{2}$$
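A quick check of the closed form against the recursion (an illustrative Python sketch added here; `a_recursive` is my own helper name):

```python
def a_recursive(n):
    # a_1 = 0 and a_n = a_{n-1} + n - 1 for n >= 2
    return 0 if n == 1 else a_recursive(n - 1) + n - 1

# The closed form a_n = n(n-1)/2 matches the recursion for small n
for n in range(1, 20):
    assert a_recursive(n) == n * (n - 1) // 2
```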
H: Expansion of $z^3 \log ( (z-a)/(z-b))$ at $\infty$ I need some hints to evaluate the expansion of $z^3 \log((z-a)/(z-b))$ at $\infty$. I thought that evaluating $\log(\frac 1 z -a)$ at $z = 0$ may be helpful. How can I proceed? AI: Let $u=\frac{1}{z}$; then $$z^3 \log ( (z-a)/(z-b))=\frac{1}{u^3}\left(\log(1-au)-\log(1-bu)\right)=\frac{1}{u^3}(-u(a-b)-\frac{u^2}{2}(a^2-b^2)-\frac{u^3}{3}(a^3-b^3)+o(u^3))$$ hence we find $$z^3 \log ( (z-a)/(z-b))=-z^2(a-b)-\frac{z}{2}(a^2-b^2)-\frac{1}{3}(a^3-b^3)+o(1)$$
H: Book recommendations for studying mathematical areas based on set theory I am at the end of my studies with set theory, and I would like to continue in a fundamental fashion and study, for example, calculus based on set theory. So I am talking not about calculus the way it is studied in college, but about calculus studied from the set-theoretic perspective. Every time a theorem is discussed, it should be said how it relates to set theory. I am looking for a book of that type, so please let me know if you know some. Thank you AI: It seems to me that you are trying to understand the set-theoretical foundation of calculus. This would be equivalent to learning how to program in C++ and then insisting on learning how the CPU interprets the compiled code, and how the compiler works. It is useful knowledge, but not very useful for C++, or in this case -- for calculus. If you wish to learn more about the interactions of set theory with other fields of mathematics, I suggest that first you get comfortable with the following set-theory-related topics: Descriptive set theory, Basic topology, Cardinal arithmetic and basic PCF theory, Model theory. Then you can apply these to measure theory, which is the modern extension of calculus; set-theoretical topology; abstract algebra (many courses in advanced model theory basically amount to algebra and algebraic geometry). Studying these topics could take a couple of years, and by then you may find yourself interested in set theory per se. Let me give some basic recommendations for books. Moschovakis - Descriptive Set Theory. Engelking - General Topology. Holz, Steffens, Weitz - Introduction to Cardinal Arithmetic. Chang, Keisler - Model Theory.
H: Each finite extension of a field has a finite number of intermediate extensions. Prove that every finite extension $K$ of a field $F$ has a finite number of intermediate extensions. EDIT: All fields here are of characteristic $0$, otherwise we would need to require the extension to be separable. AI: Each finite extension $K$ of $F$ embeds into a finite normal extension $K'$ of $F$ (for instance, the splitting field over $F$ of the minimal polynomial of a primitive element of $K$). Then $|G(K',F)|=[K':F]=n<\infty$. Every intermediate field $F\subseteq E\subseteq K$ is in particular an intermediate field of $K'/F$, and by the Fundamental theorem of Galois theory it corresponds to a subgroup of $G(K',F)$. Since the finite group $G(K',F)$ has only finitely many subgroups, there can only be a finite number of intermediate fields. q.e.d. N.B. What is more, $$|G(K',K)|=[K':K]$$ and $$[G(K',F):G(K',K)]=\frac{|G(K',F)|}{|G(K',K)|}=\frac{[K':F]}{[K':K]}=[K:F]$$
H: For each normal extension of a field whose Galois group is commutative, each intermediate extension is also normal. Let $K$ be a normal extension of the field $F$, and let the Galois group $G(K,F)$ be an Abelian group. Prove that each intermediate extension $E$ is also a normal extension. EDIT: All fields here are of characteristic $0$, otherwise we would need to require the extension to be separable. AI: Every subgroup of an Abelian group is normal. By the fundamental theorem of Galois theory, $E$ is a normal extension of $F$ if and only if $G(K,E)\triangleleft G(K,F)$; since $G(K,F)$ is Abelian, this always holds. q.e.d.
H: How does $Ae^{4ix}+Be^{-4ix}=A\cos(4x)+B\sin(4x)$? $e^{ix}=\cos(x)+i\sin(x)$ $Ae^{4ix}=A(\cos(4x)+i\sin(4x))$ $Be^{-4ix}=B(-\cos(4x)-i\sin(4x))$ What am I doing wrong? I am trying to find the complementary function of $\frac{d^2y}{dx^2} +16y=8\cos(4x)$ C.F: $x^2+16=0$ $x=4i$ and $x=-4i$ $$Ae^{4ix}+Be^{-4ix}$$ $\phi\pm\beta i$ $$\Rightarrow e^{\phi x}(A\sin\beta x+B\cos\beta x) $$ AI: You should get $Be^{-4ix}=B(\cos(-4x)+i\sin(-4x))=B(\cos(4x)-i\sin(4x))$ since $\cos(-a)=\cos a$ and $\sin(-a)=-\sin a$. By adding $Ae^{4ix}=A(\cos(4x)+i\sin(4x))$ and $Be^{-4ix}=B(\cos(4x)-i\sin(4x))$ you get $$Ae^{4ix}+Be^{-4ix}=(A+B)\cos(4x)+i(A-B)\sin(4x).$$ EDIT: (Now that you have added that you were asking because you were solving a differential equation.) We have expressed $Ae^{4ix}+Be^{-4ix}$ in the form $C\cos 4x+D\sin 4x$. Notice that we can get arbitrary $C,D\in\mathbb C$ (by choosing appropriate $A$, $B$.) So we have $y(x)=C\cos 4x+D\sin 4x$ as a general solution of $y''+16y=0$. EDIT 2: To be more precise: For any given $C$, $D$ we can find $A$ and $B$ such that $A+B=C$, $i(A-B)=D$ by solving the system of two linear equations. Namely we get $B=(C+iD)/2$ and $A=(C-iD)/2$. Basically the thing I am trying to get at is explained in different words in kahen's comment.
H: Prove that $x+e^{2x}=1$ has only one solution I'm trying to prove that this equation has only one solution. $$x+e^{2x}=1$$ So what I did was to apply $\ln$ to this equation and get: $$\ln(x)+2x=0$$ I need some hint on how to continue from here. Thanks! AI: Hint: Ignore what you did. Consider the function $f(x)=x+e^{2x}-1$. Relate this function to your problem somehow and use the intermediate value theorem. This takes care of the existence of one solution. To ensure it's unique, think about $f'$. Regarding your work, note that the equations you got aren't equivalent due to the fact that the LHS of the first equation makes sense on a bigger set than the LHS of the second equation.
H: How to solve the equation $\sum_{k=0}^{n}x^kC_{n}^{k}\cos{k\theta}=0$ Find all real numbers $x$, such that $$\sum_{k=0}^{n}x^kC_{n}^{k}\cos{ka}=0$$ My idea: we can find this value $$\sum_{k=0}^{n}x^kC_{n}^{k}\cos{ka}.$$ Use Euler's formula $e^{ikx}=\cos{kx}+i\sin{kx}\Longrightarrow 2\cos{kx}=e^{ikx}+e^{-ikx}$, $$2x^kC_{n}^{k}\cos{ka}=\left((e^{ia}x)^k+(e^{-ia}x)^k\right)C_{n}^{k}$$ so $$2\sum_{k=0}^{n}x^kC_{n}^{k}\cos{ka}=\sum_{k=0}^{n}\left((xe^{ia})^k+(xe^{-ia})^{k}\right)C_{n}^{k}=(xe^{ia}+1)^n+(xe^{-ia}+1)^n$$ AI: Let's calculate $$ \sum x^k C_n^k\cos k\theta =\mathfrak{Re} \sum C_n^k x^k e^{ik\theta}=\mathfrak{Re}(1+xe^{i\theta})^n. $$ If $\mathfrak{Re}(1+xe^{i\theta})^n=0$, then $$ \mathfrak{Arg}(1+xe^{i\theta})^n=\pm \frac{1}{2}\pi,\\ \mathfrak{Arg}(1+xe^{i\theta})=\frac{2k+1}{2n}\pi, $$ the rest is easy. You can show that if and only if $$ 2\theta-\pi=\frac{2k+1}{2n}\pi, $$ you have a real solution $$ x=2\sin \frac{2k+1}{2n}\pi. $$
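The key identity used above, $\sum_{k=0}^n x^k C_n^k\cos ka=\mathfrak{Re}(1+xe^{ia})^n$, is just the binomial theorem applied to $(1+xe^{ia})^n$; here is a small numerical confirmation (an illustrative Python sketch with arbitrarily chosen $n$, $a$, $x$):

```python
import cmath
from math import comb, cos

n, a, x = 7, 0.9, 1.3
lhs = sum(x ** k * comb(n, k) * cos(k * a) for k in range(n + 1))
rhs = ((1 + x * cmath.exp(1j * a)) ** n).real

print(abs(lhs - rhs))  # ~1e-13: the sum equals Re(1 + x e^{ia})^n
```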
H: Can I exchange limit and differentiation for a sequence of smooth functions? Let $(f_n)_{n\in \mathbb N}$ be a sequence of smooth functions converging to some $f$. Under what circumstances can I exchange limit and derivative? I.e., $$\lim_{n\rightarrow \infty} \frac{\partial f_n(x)}{\partial x} = \frac{\partial f(x)}{\partial x}$$ AI: If you have a sequence of functions $(f_n)_{n\in \mathbb N}$ that are differentiable and converge pointwise at some point $x_0$, and if their derivatives converge uniformly, say on a given interval $[a,b]$, supposing they are real-valued functions, then the sequence of functions $(f_n)_{n\in \mathbb N}$ is uniformly convergent to $f$, and what is more, $$\lim_{n\rightarrow \infty} \frac{\partial f_n(x)}{\partial x} = \frac{\partial f(x)}{\partial x}$$ This is a standard theorem in Analysis. See Walter Rudin's Principles of Mathematical Analysis, 3rd Edition, Theorem $7.17$ for a detailed proof.
H: Why $\mbox{Ker }T^{*}\oplus\overline{\mbox{Im }{T}}=X$? Can you explain to me, or point me to a proof of, why $$\mbox{Ker }T^{*}\oplus\overline{\mbox{Im }{T}}=X \mbox{ ? }$$ Here $X$ is a complex Hilbert space. Thanks :) AI: I'm assuming that $T : X \to X$ is a continuous linear operator. Let $x \in \bigl( \mathrm{Im}(T) \bigr)^\bot$, then $\|T^*x\|^2 = \langle T^*x, T^*x \rangle = \langle x, TT^*x \rangle = 0$ and therefore $x \in \mathrm{Ker}(T^*)$. Conversely, if $x \in \mathrm{Ker}(T^*)$, then for all $z \in \mathrm{Im}(T), z = Ty$, $\langle x, z \rangle = \langle T^*x, y\rangle = 0$ and $x \in \bigl(\mathrm{Im}(T)\bigr)^\bot$. So we get that $\bigl(\mathrm{Im}(T)\bigr)^\bot = \mathrm{Ker}(T^*)$. By general properties of Hilbert space, if $F \subset X$ is closed, then $X = F \oplus F^\bot$. Also for any subspace $F$, $F^{\bot\bot} = \bar F$. $\mathrm{Ker}(T^*)$ is closed, as it is the kernel of a continuous operator. Therefore $X = \mathrm{Ker}(T^*) \oplus \mathrm{Ker}(T^*)^\bot = \mathrm{Ker}(T^*) \oplus \bigl( \mathrm{Im}(T) \bigr)^{\bot\bot} = \mathrm{Ker}(T^*) \oplus \overline{\mathrm{Im}(T)}$.
H: Understanding an example of M. L. Wage, W. G. Fleissner, and G. M. Reed In this paper of M. L. Wage, W. G. Fleissner, and G. M. Reed, the authors claimed that having a zeroset diagonal does not guarantee submetrizability by showing Example 2. However, the example is very obscure. The construction of Example 2 is done in a similar manner by considering Heath's "V" space defined on a Q-set. Could somebody help explain it? Added by Arthur Fischer. In Heath, R. W., Screenability, pointwise paracompactness, and metrization of Moore spaces, Canad. J. Math. 16 (1964), 763–770, MR0166760, link Heath's "V" space is constructed as follows. Let $X = \{ \langle x , y \rangle \in \mathbb{R} \times \mathbb{R} : y \geq 0 \}$ be the closed upper half-plane, and give it the topology generated by the following basic open neighbourhoods: every $\langle x ,y \rangle \in X$ with $y > 0$ is isolated; for $x \in \mathbb{R}$, given $n \geq 1$ the set $V_{x,n} = \{ \langle\, x+y , |y| \,\rangle : \frac{-1}{n} \leq y \leq \frac{1}{n} \}$ is a basic open neighbourhood of $\langle x , 0 \rangle$. (Note that $V_{x,n}$ is a "$\mathsf{V}$" with vertex $\langle x,0 \rangle$, slopes $\pm 1$ and height $\frac 1n$.) This is a pointwise paracompact Moore space which is not screenable. AI: The construction is given in much more detail in G.M. Reed, ‘On normality and countable paracompactness’, Fundamenta Mathematicae, Vol. $110$, Issue $2$, pp. $145-152$, freely available here.
H: Taylor series and uniform convergence Maybe this is a silly question, but I am confused, so I hope someone can help me. Is the convergence of the Taylor series uniform? To be more specific: We know for example that $\displaystyle{ e^x = \sum_{n=0}^{\infty} \frac{x^n}{ n!} \quad}$ , $\displaystyle{ \sin x = \sum_{n=0}^{\infty} \frac{ (-1)^{n+1} }{ (2n+1)!} x^{2n+1} }$ Now the question is: Do these series of functions converge uniformly to $e^x$ and $\sin x$ respectively? And one more question: Every Taylor series converges (pointwise) for every $x$, so has radius of convergence $\infty$, right? These questions came up when I was studying and saw in my notes that the lecturer interchanges the summation and the integration of the Taylor series, for example $\displaystyle{ \frac{1}{ 1+ x^2} = \sum_{n=0}^{\infty} (-1)^n x^{2n}, |x|<1 \implies \arctan x = \int_{0}^{x} \sum_{n=0}^{\infty} (-1)^n t^{2n} dt= \sum_{n=0}^{\infty} (-1)^n \int_{0}^{x} t^{2n} dt }$ and I can't understand how we can do this if the convergence is not uniform, which in this example it is not, since $\displaystyle{ \sum_{n=0}^{\infty} x^n = \frac{1}{1-x} }$ pointwise but not uniformly. AI: Hints: For your first question: Weierstrass $\;M$-test. For your second question: no. For example, the series of $\,\frac1{1-x}\;$ converges only for $\,|x|<1\,$, and the convergence is uniform only on $\,|x|\leq r\,$ for each $\,r<1\;$ ...
H: Evaluate $ \int^{ \pi/2}_{- \pi/2} \frac {1}{ 1+e^{\sin x} }dx $ Evaluate $ \int^{\pi/2}_{-\pi/2} \frac {1}{ 1+e^{\sin x} }dx $ Solution: I think odd/even functions are of no use here. Also we get nothing by taking $e^{\sin x} $ common in the denominator. Also rationalizing the denominator is of no use. Really I have no idea how to solve this question. Please help. AI: Put $$f(x)=\frac1{1+e^{\sin x}}\implies f(-x)=\frac1{1+e^{\sin(-x)}}=\frac1{1+e^{-\sin x}}=e^{\sin x}f(x)=1-f(x)$$ so that we have $$\int\limits_{-\pi/2}^{\pi/2} f(x)\,dx=\int\limits_{-\pi/2}^0f(x)\,dx+\int\limits_0^{\pi/2}f(x)\,dx=$$ $$\int\limits_0^{\pi/2}f(x)e^{\sin x}dx+\int\limits_0^{\pi/2}f(x)\,dx=\frac{\pi}2$$ (substituting $x\mapsto -x$ in the first integral and using $f(-x)=e^{\sin x}f(x)$; the last step uses $e^{\sin x}f(x)+f(x)=1$).
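A numerical cross-check of the value $\pi/2$ (an illustrative sketch, assuming scipy is available):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: 1.0 / (1.0 + np.exp(np.sin(x))), -np.pi / 2, np.pi / 2)
print(val, np.pi / 2)  # both ~1.5707963267948966
```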
H: Prove that $\langle x,y\rangle=x A y^*$ is an inner product In $\mathbb{C}^2$, I want to show that $\langle x,y \rangle =xAy^*$ is an inner product. I am almost done with the other properties of an inner product, but one property is left: $\langle x,x \rangle >0$ if $x \neq 0$. To prove this, $$\begin{align*} \langle x,x \rangle&=(x_1, x_2)A(x_1, x_2)^*\\ &=|x_1 |^2-ix_2\overline{x_1}+ix_1\overline{x_2}+|x_2 |^2\\ &=|x_1 |^2 +2\operatorname{Re}(ix_1\overline{x_2})+|x_2 |^2 \end{align*}$$ where $\operatorname{Re}(z)$ is the real part of the complex number $z$. I can't follow the last equality. Why does $\operatorname{Re}(z)$ appear here? AI: For any complex number $w$, we know that $w + \bar w = 2 \Re(w)$. Apply this with $w = ix_1\bar x_2$.
H: A counterexample to the Hahn-Banach theorem in a topological vector space Problem: Give an example of a TVS $\mathcal{X}$ that is not locally convex and a subspace $\mathcal{Y}$ of $\mathcal{X}$ such that there is a continuous linear functional $f$ on $\mathcal{Y}$ with no continuous extension to $\mathcal{X}$. I think this problem means that the Hahn-Banach theorem (LCS version) may not hold in a TVS. But I can't find a counterexample. AI: Hint: Do you know of a TVS with trivial dual space? Then take any non-zero linear functional on a one-dimensional subspace.
H: Which set theories without the power set axiom are used occasionally? To get a set theory without the power set axiom, I could just take an existing set theory like ZF or ZFC, and remove the power set axiom. However, perhaps I would have to be careful how to formulate the other axioms then, or have to add some sentences that were provable before in the presence of the power set axiom as additional axioms. So if I need a set theory without the power set axiom, it seems wiser to use a theory already investigated in sufficient detail by somebody else. Of course, the theory should be sufficiently "well behaved" so that it is still used at least occasionally. (If ZF or ZFC without the power set axiom should turn out to be such theories, then of course they also qualify as an answer.) AI: Take a look at "What is the theory ZFC without power set?" by Victoria Gitman, Joel David Hamkins, Thomas A. Johnstone, freely available at arxiv: http://arxiv.org/abs/1110.2430 This would seem to give you an excellent (and recent!) starting point for thinking about your question. One thing the paper makes clear is that the issue of what you can prove in a theory without the power set axiom depends on your choice of the remaining axioms (equivalent systems of axioms for set theory, both including the powerset axiom, can become inequivalent when you remove the power set axiom).
H: Representing a nonzero bilinear alternating form on a two-dimensional space by $\bigl(\begin{smallmatrix} 0&1\cr-1&0\end{smallmatrix}\bigr)$. So I am having a little bit of a struggle with a question. I have a bilinear form $B:V\times V\to \Bbb F$. $B$ is not the zero form. $B$ is alternating, meaning it is also skew-symmetric. I know that $\dim(V)=2$. How can I show that there exists a basis for $V$ such that the matrix representing $B$ looks like this: \begin{pmatrix} 0 & 1\\ -1 & 0\\ \end{pmatrix} I do not really understand how I can show that if I do not know what $B$ does. I am guessing that because of the alternating and skew-symmetric properties I can show that the matrix representing $B$ looks like: \begin{pmatrix} 0 & a\\ -a & 0\\ \end{pmatrix} But how can I show that $a=1$? AI: Let $(v,w)$ be any basis of $V$. Since $B$ alternates, $B(v,v)=B(w,w)=0$. Hence $B(v,w)\neq 0$, otherwise $B$ would be the zero form. Now define $w^\prime = \frac{w}{B(v,w)}$. You should be able to show that $B=\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ with respect to the basis $(v,w^\prime)$.
H: compute pca with this useful trick A is a matrix (m rows, n cols); each row is an object, and each col is a feature (a dimension). Typically, I compute the PCA based on the covariance matrix, that is A'A, where A' is the transposed matrix of A. Today I read a book which presents a useful trick to compute PCA: if n >> m, then we can compute the eigenvectors of the matrix AA', which might save a lot of memory. Here is the code from the book (with the numpy imports made explicit):

```python
from numpy import dot, sqrt, linalg  # imports added; the book assumes numpy is in scope

def pca(X):
    """ Principal Component Analysis
    input: X, matrix with training data stored as flattened arrays in rows
    return: projection matrix (with important dimensions first), variance and mean. """
    # get dimensions
    num_data, dim = X.shape
    # center data
    mean_X = X.mean(axis=0)
    X = X - mean_X
    # PCA - compact trick used
    M = dot(X, X.T)  # covariance matrix, AA', not the A'A like usual
    e, EV = linalg.eigh(M)  # compute eigenvalues and eigenvectors
    tmp = dot(X.T, EV).T  # this is the compact trick
    V = tmp[::-1]  # reverse since last eigenvectors are the ones we want
    S = sqrt(e)[::-1]  # reverse since eigenvalues are in increasing order
    for i in range(V.shape[1]):
        V[:, i] /= S  # What for?
    # return the projection matrix, the variance and the mean
    return V, S, mean_X
```

Now I understand the algebra behind this useful trick, but there is something that confuses me, namely the for-loop: why divide V by S? To normalize V to unit length? AI: Yes, this is normalization. Recall that $V$ was obtained from the eigenvectors of $AA^T$. Let $v$ be a unit norm eigenvector for $AA^T$. Since $AA^Tv=\lambda v$, multiplying by $A^T$ on the left we obtain $A^TA(A^Tv)=\lambda(A^T v)$. Thus, $A^Tv$ is an eigenvector for $A^TA$. However it is not a unit vector: the multiplication stretches it by $\lambda^{1/2}$. Dividing it by $\lambda^{1/2}$, we get a unit eigenvector for $A^TA$.
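A self-contained numerical check of the normalization claim (my own illustrative sketch, not from the book): lifting a unit eigenvector of $AA^T$ by $A^T$ stretches it by $\lambda^{1/2}$, and dividing by $\sqrt{\lambda}$ restores unit norm:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 100))   # m = 5 samples, n = 100 features, n >> m
A = A - A.mean(axis=0)              # center the data

# Eigendecompose the small m x m matrix AA^T instead of the n x n matrix A^T A
lam, EV = np.linalg.eigh(A @ A.T)
v = A.T @ EV[:, -1]                 # lift the top eigenvector to n dimensions
v_unit = v / np.sqrt(lam[-1])       # ||A^T v|| = sqrt(lambda), so this is unit norm

# It is a unit eigenvector of A^T A with the same eigenvalue
assert np.allclose((A.T @ A) @ v_unit, lam[-1] * v_unit)
assert np.isclose(np.linalg.norm(v_unit), 1.0)
```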
H: Calculating interest rate of car financing I want a new car which costs $\$26{,}000$. But there's an offer to finance the car: Immediate prepayment: $25\%$ of the original price. The amount left is financed with a loan: duration $5$ years, installment of $\$400$ at the end of every month. So I need to calculate the rate of interest of this loan. Do I need Excel for this exercise? Or which formula could I use for this exercise? AI: You could use Excel (see below) or you could solve the equation $(2)$ below numerically, e.g. using the secant method. We have a so-called uniform series of $n=60$ constant installments $m=400$. Let $i$ be the nominal annual interest rate. The interest is compounded monthly, which means that the number of compounding periods per year is $12$. Consequently, the monthly installments $m$ are compounded at the interest rate per month $i/12$. The value of $m$ in the month $k$ is equivalent to the present value $m/(1+i/12)^{k}$. Summing in $k$, from $1$ to $n$, we get a sum that should be equal to $$P=26000-\frac{26000}{4}=19500.$$ This sum is the sum of a geometric progression of $n$ terms, with ratio $1+i/12$ and first term $m/(1+i/12)$. So $$\begin{equation*} P=\sum_{k=1}^{n}\frac{m}{\left( 1+\frac{i}{12}\right) ^{k}}=\frac{m}{1+\frac{ i}{12}}\frac{\left( \frac{1}{1+i/12}\right) ^{n}-1}{\frac{1}{1+i/12}-1}=m \frac{\left( 1+\frac{i}{12}\right) ^{n}-1}{\frac{i}{12}\left( 1+\frac{i}{12} \right) ^{n}}.\tag{1} \end{equation*}$$ The ratio $P/m$ is called the series present-worth factor (uniform series)$^1$. For $P=19500$, $m=400$ and $n=5\times 12=60$ we have: $$\begin{equation*} 19500=400 \frac{\left( 1+\frac{i}{12}\right) ^{60}-1}{\frac{i}{12}\left( 1+\frac{i}{12} \right) ^{60}}.\tag{2} \end{equation*}$$ I solved $(2)$ numerically for $i$ using SWP and got $$ \begin{equation*} i\approx 0.084923\approx 8.49\%.\tag{3} \end{equation*} $$ ADDED. Computation in Excel for the principal $P=19500$ and interest rate $i=0.084923$ computed above. I used a Portuguese version; that's why the decimal values show a comma instead of the decimal point. The column $k$ is the month ($1\le k\le 60$). The 2nd column is the amount $P_k$ still to be paid at the beginning of month $k$. The 3rd column is the interest $P_ki/12$ due to month $k$. The 4th column is the sum $P_k+P_ki/12$. The 5th column is the installment paid at the end of month $k$. The amount $P_k$ satisfies $$P_{k+1}=P_k+P_ki/12-m.$$ We see that at the end of month $k=60$, $P_{60}+P_{60}i/12=400=m$. The last installment $m=400$ at the end of month $k=60$ balances entirely the remaining debt, which is also $400$. We could find $i$ by trial and error. Start with $i=0.01$ and let the spreadsheet compute the table values, until we have in the last row exactly $P_{60}+P_{60}i/12=400$. -- $^1$ James Riggs, David Bedworth and Sabah Randhawa, Engineering Economics, McGraw-Hill, 4th ed., 1996, p. 43.
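For completeness, here is the secant-method computation suggested above, written as a short Python sketch (my own implementation of equation $(2)$; the function name is illustrative):

```python
def balance(i, m=400.0, n=60, P=19500.0):
    """Equation (2) rearranged: zero exactly when i is the sought interest rate."""
    r = i / 12.0
    return P - m * ((1 + r) ** n - 1) / (r * (1 + r) ** n)

# Secant iteration starting from two rough guesses for the annual rate
a, b = 0.01, 0.20
for _ in range(50):
    fa, fb = balance(a), balance(b)
    if fb == fa:          # converged: avoid division by zero
        break
    a, b = b, b - fb * (b - a) / (fb - fa)

print(b)  # ~0.08492, i.e. about 8.49%, matching (3)
```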
H: Why start the Taylor series of $\cos^{2} x$ at $k=1$ and not just $k=0$? I do not understand the problem with $2^{-1}$ I'm using $\cos^2 x=\frac{1}{2}(1+\cos(2x))$ and $\cos 2x = \sum_{k=0}^\infty (-1)^k \frac{(2x)^{2k}}{(2k)!}$ to find the sum for the Taylor series of $\cos^2 x$. I thought I was getting it. I find the answer $$\cos^{2} x =\frac{1}{2} + \sum_{k=0}^\infty (-1)^k \frac{2^{(2k-1)}x^{2k}}{(2k)!}$$ however, my textbook does one more step and starts at $k=1$ instead of $k=0$, which they write as $$\cos^{2} x = 1+\sum_{k=1}^\infty (-1)^k \frac{2^{2k-1}x^{2k}}{(2k)!}$$ I understand that they do this to prevent $2^{-1}$ at $k=0$, but I don't understand what would be the problem with $2^{-1}$, as negative powers work fine. I don't understand why this last step is made. Thanks in advance. AI: Now some mathematical corrections and answers. $$\cos (x) = \sum_{k=0}^\infty(-1)^k (x)^{2k}/(2k)!,$$ $$\cos (2x) = \sum_{k=0}^\infty(-1)^k (2x)^{2k}/(2k)!.$$ Hence \begin{align}\cos^2 (x) &=\frac{1}{2}+\frac{1}{2}\cos (2x)=\frac{1}{2}+\frac{1}{2}\sum_{k=0}^\infty(-1)^k (2x)^{2k}/(2k)!=\\&=\frac{1}{2}+\sum_{k=0}^\infty(-1)^k 2^{2k-1}x^{2k}/(2k)!=\\&=1+\sum_{k=1}^\infty(-1)^k 2^{2k-1}x^{2k}/(2k)!. \end{align} In the last step, in order to start the summation at $k=1$, I just evaluated the expression for $k=0$ (and got $1/2$).
H: Definition of period of a decimal representation of a number I need to define the period of a decimal representation of a number!! Thanks in advance!! AI: Period of a decimal representation of a number may be defined in terms of cyclic numbers: See https://en.wikipedia.org/wiki/Cyclic_number#Relation_to_repeating_decimals for fun details
H: If $T\colon V\to V$ is linear then$\text{ Im}(T) = \ker(T)$ implies $T^2 = 0$ I'm trying to show that if $V$ is finite dimensional and $T\colon V\to V$ is linear then$\text{ Im}(T) = \ker(T)$ implies $T^2 = 0$. I've tried taking a $v$ in the kernel, and then since it's in the kernel we know it's in the image, so there is a $w$ such that $T(w) = v$, so then $TT(w) = 0$, but that's only for a specific $w$? Thanks AI: $$\text{For }v\in V,~Tv\in \text{im}(T)=\ker(T)\implies T(Tv)=0$$
H: How to count unlabeled balls and labeled boxes case How do I count the case of unlabeled balls and labeled boxes? Each box can hold more than one ball, and some boxes may have no balls. For example: 10 labeled boxes, 8 unlabeled balls. You are welcome to use polynomial counting; better yet, a closed formula or generating function. I would also like to know Prob(a labeled box has 3 balls). AI: If I understand your question correctly (assuming each ball is placed independently and uniformly at random), if you have n boxes then each ball has a 1/n chance of being in each box. So if you have k balls and you want the probability that a box has x balls in it, the answer is given by the binomial distribution: $^kC_x(\frac{1}{n})^x(1-\frac{1}{n})^{k-x}$
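Numerically, for the stated example (an illustrative sketch under the same independent uniform placement assumption):

```python
from math import comb

n, k, x = 10, 8, 3   # 10 labeled boxes, 8 balls, ask for exactly 3 in a given box
p = comb(k, x) * (1 / n) ** x * (1 - 1 / n) ** (k - x)
print(p)             # ~0.0331
```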
H: Calculate the sum $\sum\limits_{n=1}^{\infty}\frac{(-1)^n}{n\times2^{2n+1}}$ I started with $\arctan(x) = \sum\limits_{n=0}^{\infty}(-1)^n\frac{x^{2n+1}}{2n+1}$. Then I differentiated to get rid of the denominator. Then divide by $x$ to get $x^{2n-1}$. Then integrate to get $2n$ in the denominator. Then multiply by 2 to get rid of the 2 in the denominator, and finally, multiply by $x$ to get $x^{2n+1}$. Then, with $x=\frac{1}{2}$ I should get the sum I was looking for. Right? Well, I've gone through all my steps, did the differentiation a couple of times and confirmed it was correct with Wolfram Alpha, but my end result is $-\ln\sqrt{5}$ while Wolfram says it should be $\ln{\frac{2}{\sqrt{5}}}$ What did I do wrong? AI: The sum you used starts at $n=0$ whereas the one you want starts at $n=1$. Also recall that $-\ln(x)=\ln\left(\frac{1}{x}\right)$
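A partial-sum check of the corrected value (an illustrative Python sketch):

```python
from math import log

s = sum((-1) ** n / (n * 2 ** (2 * n + 1)) for n in range(1, 60))
print(s, log(2 / 5 ** 0.5))  # both ~ -0.111571775657
```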
H: Any formula to find the maximum $n$th power of $x$ contained in a number? Is there any formula to find the maximum $n$th power of $x$ contained in a number? Say I need to find $n$ where $x$ is $2$ and the number is $25$. So the answer must be $4$. The problem statement is to find $n$ where $x^n \le m$ and $x^{n+1} > m$, where $x,m,n$ are all positive integers. AI: You are asking for the maximum $n$ such that, for a given base $x$ and number $y$, we have: $$x^{n}\leq y$$ By definition of the logarithm we have: $$n\leq \log_{x}(y)$$ And as we want the largest integer $n$ satisfying this, we can use the floor function $\left\lfloor \cdot\right\rfloor$, which gives the greatest integer not exceeding its argument. Therefore: $$n=\left\lfloor \log_{x}(y) \right\rfloor$$ So for instance, using your example, where $x=2$ and $y=25$ we have: $$n=\left\lfloor\log_{2}(25)\right\rfloor=\left\lfloor4.6438\dots\right\rfloor=4$$ Which is the answer you expected.
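In code one should be careful with floating-point logarithms near exact powers; an integer-only loop is safer. Here is a small Python sketch (`max_power` is a hypothetical helper name):

```python
from math import floor, log

def max_power(x, m):
    """Largest n >= 0 with x**n <= m, for integers x >= 2 and m >= 1."""
    n, p = 0, x
    while p <= m:        # pure integer arithmetic, no rounding issues
        n += 1
        p *= x
    return n

assert max_power(2, 25) == 4                     # 2^4 = 16 <= 25 < 32 = 2^5
assert max_power(2, 25) == floor(log(25, 2))     # agrees with the formula here
assert max_power(10, 10 ** 15) == 15             # exact powers handled correctly
```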
H: Let $A \in M_{n\times n} (\mathbb R)$ such that $A^8 + A^2 = I$. Prove that $A$ is diagonalizable Let $A \in M_{n\times n} (\mathbb R)$ such that $A^8 + A^2 = I$. Prove that $A$ is diagonalizable. What I did so far: We know that $A$ is a root of the polynomial $f(x) = x^8 + x^2 -1$. Now we need to figure out what $m_A(x)$ (the minimal polynomial) is. But right there is what I had trouble with. Any direction will be appreciated! AI: Let $f(x)=x^8+x^2-1$. Then $f'(x)=8x^7+2x$. If we compute the greatest common divisor of $f$ and $f'$ (say using the Euclidean algorithm http://www.wolframalpha.com/input/?i=gcd+x%5E8%2Bx%5E2-1%2C+8x%5E7%2B2x ), we'll find that they are relatively prime. This means $f$ has no repeated roots. The minimal polynomial of $A$ must divide $f$ (which has no repeated roots). Therefore, the minimal polynomial of $A$ has no repeated roots (and thus factors into distinct linear factors over $\mathbb{C}$). Thus $A$ is diagonalizable (over $\mathbb{C}$).
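The gcd computation can be reproduced symbolically (an illustrative sketch, assuming sympy is available):

```python
from sympy import gcd, symbols

x = symbols('x')
print(gcd(x**8 + x**2 - 1, 8*x**7 + 2*x))  # 1, so f and f' share no root
```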
H: Calculus - Indefinite integration Find $\int \sqrt{\cot x} +\sqrt{\tan x}\,dx$ Problem: Find $\int \sqrt{\cot x} +\sqrt{ \tan x}\,dx$ My working: Let $I_1 = \int \sqrt{\cot x}\,dx$ and $I_2 = \int \sqrt{\tan x}\,dx$. By using integration by parts: $I_1 = \sqrt{\cot x}\cdot\int 1\, dx - \int\left(\frac{d}{dx}\sqrt{\cot x}\cdot\int 1\,dx\right)dx$ $\Rightarrow I_1= x\sqrt{\cot x} + 2x\sqrt{\cot x}-2\int \sqrt{\cot x}\,dx$ $\Rightarrow I_1= x\sqrt{\cot x} + 2x\sqrt{\cot x}-2 I_1$ $\Rightarrow 3I_1= x\sqrt{\cot x} + 2x\sqrt{\cot x} = 3x\sqrt{\cot x} $ $\Rightarrow I_1 = x\sqrt{\cot x}$ Similarly we can find $I_2 = \int\sqrt{\tan x}\,dx$ $I_2 = x\sqrt{\tan x}$ Please suggest whether this is wrong or correct... AI: HINT: $$\sqrt{\cot x}+\sqrt{\tan x}=\frac{\sin x+\cos x}{\sqrt{\sin x\cos x}}$$ Now as $\int (\sin x+\cos x)dx=-\cos x+\sin x$ and $(-\cos x+\sin x)^2=1-2\sin x\cos x\implies \sin x\cos x=\frac{1-(-\cos x+\sin x)^2}2$ So, $$\sqrt{\cot x}+\sqrt{\tan x}=\frac{\sin x+\cos x}{\sqrt{\sin x\cos x}}=\sqrt2\frac{\sin x+\cos x}{\sqrt{1-(-\cos x+\sin x)^2}}$$ Put $-\cos x+\sin x=u$
H: Comparing fields with same degree Two part question: Are the fields $\mathbb{Q} (\sqrt[3]{2}, i \sqrt{3})$ and $\mathbb{Q} (\sqrt[3]{2}, i, \sqrt{3})$ identical in algebraic structure? I have in notes that they both have degree of 6 over $\mathbb{Q}$. How do I show explicitly that $\mathbb{Q} ( i \sqrt{3})$ is only degree 2 over $\mathbb{Q}$. The usual trick is to adjoin the real roots and then adjoin the complex root, but it's a different story when it's not just $i$ by itself. Edit: I'm starting to mistrust that the degree of the two extensions are identical. AI: For starters, neither $i$ nor $\sqrt{3}$ is in $\mathbb{Q}(\sqrt[3]{2},i\sqrt{3})$. And I think you're right to mistrust that the degrees are equal since $[\mathbb{Q}(\sqrt[3]{2},i,\sqrt{3}):\mathbb{Q}]=12$.
H: Eigenvectors of real normal endomorphism A normal endomorphism of a complex vector space whose matrix has only real entries always has eigenvalues in conjugate pairs, meaning that together with each eigenvalue we also get its complex conjugate. Now I was wondering whether this statement is also true for eigenvectors. Therefore the question is: if we have $\lambda$ and $\bar{\lambda}$ as eigenvalues, do we also have eigenvectors $v$ and $\bar{v}$, where $v$ belongs to $\lambda$ and $\bar{v}$ belongs to $\bar{\lambda}$? AI: The normality condition is irrelevant. If $\lambda$ is a real eigenvalue of the matrix $A$, take an eigenvector $\mathbf{v}$ and write it as $\mathbf{a}+i\mathbf{b}$, where $\mathbf{a}$ and $\mathbf{b}$ are vectors with real coefficients. Then $$ A\mathbf{v}=A\mathbf{a}+iA\mathbf{b} $$ so $$ \lambda\mathbf{a}+i\lambda\mathbf{b}=A\mathbf{a}+iA\mathbf{b} $$ and equating real and imaginary parts you get $$ A\mathbf{a}=\lambda\mathbf{a},\qquad A\mathbf{b}=\lambda\mathbf{b} $$ so you find a "real" eigenvector, because one among $\mathbf{a}$ and $\mathbf{b}$ must be non zero. If $\lambda$ is not real, you can apply conjugation: if $\mathbf{v}$ is an eigenvector, then $A\mathbf{v}=\lambda\mathbf{v}$, so also $$ A\bar{\mathbf{v}}=\bar{\lambda}\bar{\mathbf{v}} $$ This shows also that the map $\mathbf{v}\mapsto\bar{\mathbf{v}}$ is a bijection between the eigenspaces relative to $\lambda$ and $\bar{\lambda}$. It's not a linear map, but easy considerations show that the two eigenspaces have the same dimension (a linear dependency relation in one space translates into a linear dependency relation in the other, with conjugate coefficients).
H: Harmonic number divided by n How do I prove that $\dfrac{H_n}{n}$ (where $H_n$ is the $n$th harmonic number) converges to $0$ as $n \to \infty$? AI: Let $1\leqslant k\leqslant n$. Using the upper bounds $\frac1i\leqslant1$ for every $i\leqslant k$ and $\frac1i\leqslant\frac1k$ for every $k\lt i\leqslant n$ yields $H_n\leqslant k+\frac1k(n-k)=k-1+\frac{n}k$. Choose $k$ such that $\sqrt{n}\leqslant k\leqslant \sqrt{n}+1$. Then $k-1\leqslant\sqrt{n}$ and $\frac1k\leqslant\frac1{\sqrt{n}}$ hence $H_n\leqslant2\sqrt{n}$. This is valid as soon as $\sqrt{n}\leqslant n$, that is, for every $n\geqslant1$. Finally, for every $n\geqslant1$, $\frac{H_n}n\leqslant\frac2{\sqrt{n}}$, in particular $\frac{H_n}n\to0$ when $n\to\infty$.
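The bound $H_n\le 2\sqrt{n}$ from the proof is easy to watch numerically (an illustrative Python sketch):

```python
from math import sqrt

H = 0.0
for n in range(1, 10001):
    H += 1.0 / n
    assert H <= 2 * sqrt(n)   # the bound from the argument above

print(H / 10000)              # ~0.00098: H_n / n is heading to 0
```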
H: Find $ \int {dt\over 2t+1}$ A simple question, but I'm stuck anyway: How to integrate this: $$ \int {dt\over 2t+1} = ? $$ Is it simply: $\ln|2t +1| $ or do I need the Chain rule, like: $$\ln|2t +1| \cdot \frac{d}{dt}(2t + 1) $$ AI: You've got the right idea about the "form" of the integral, but recall, we need to account for the chain rule before integrating, by u-substitution for example. In general, when you have an integral of the form $\int \dfrac{f'(t)}{f(t)} \,dt$ your result will indeed be: $$\int \dfrac{f'(t)}{f(t)} \,dt = \ln|f(t)| + C$$ In your case, we have $f(t) = u =2t + 1$. Now, we need $f'(t):\; du = 2 dt\iff \frac 12 du = dt$, so we need to obtain the form $$\int \frac{f'(t)}{f(t)}\,dt$$ which we can obtain directly, or through substitution: $$\int \frac{dt}{2t + 1} = \frac 12 \int \underbrace{\frac{2\,dt}{2t + 1}}_{\dfrac{f'(t)}{f(t)} dt} \quad \overset{\text{substitute}}{=} \quad \frac 12 \int \frac{du}{u} \quad = \quad \frac 12\ln|u| + C = \cdots$$
H: Integration of $\int(2-x/2)^2dx$ Got an exam tomorrow and my head is no longer working. Could someone walk through the integration of this function $$\int\left(2-\frac x2\right)^2dx$$ I understand integration by parts and stuff like that. AI: Hint: $\displaystyle \int u'(x)(u(x))^2dx=\dfrac{(u(x))^3}{3}+C$. Here $u(x)=2-\frac x2$ has $u'(x)=-\frac12$, so write the integrand as $-2\,u'(x)\,(u(x))^2$ and pull the constant $-2$ out of the integral.
H: Not Equivalent Interpretation Can someone provide an interpretation to show that the following are not equivalent: $$\forall x \in D, P(x) \vee Q(x)\;\;\text{vs.}\;\;(\forall x \in D, P(x)) \vee (\forall x \in D, Q(x))$$ They seem equivalent. AI: Let $D=\Bbb R$ and $P(x)$ mean $x\ge 0$. Can you find a suitable property $Q(x)$?
H: find the limit $\lim \limits_{x \to {\pi/2}} \frac {\sin x -(\sin x)^{\sin x}} {1-\sin x+\log (\sin x)}$ $\displaystyle \lim\limits_{x \to {\pi/2}} \frac {\sin x -(\sin x)^{\sin x}} {1-\sin x+\log (\sin x)}$ Solution: We can solve this question by L'Hôpital's rule, but it will be a bit tedious. Is there any other easy method to solve this question? AI: Set $y=\sin x$. Then you have $$\lim_{y\rightarrow 1}\frac{y-y^y}{1-y+\ln y}=\left(\lim_{y\rightarrow 1}y\right) \left(\lim_{y\rightarrow 1}\frac{1-y^{y-1}}{1-y+\ln y}\right)$$ This is now an easier L'Hôpital's rule application.
H: A markov chain inequality in Billingsley that should be an equation? In the section on Markov chains in Billingsley's Probability and Measure (3e) we have the following inequality on page 120 in the proof of Theorem 8.3, $$ \begin{align*} p_{ji}^{(m)} &= P_j([X_m=i] \cap [X_n = j \text{ i.o.}])\\ &\le \sum_{n>m} P_j (X_m = i, X_{m+1} \ne j,\ldots, X_{n-1} \ne j,X_n=j) \end{align*} $$ [Notation: $p_{ji}^{(m)} = P(X_m =i \mid X_0=j)$, $P_j(A) = P(A \mid X_0=j)$.] My question is regarding the inequality sign on the second line. $X_n=j$ infinitely often if and only if $X_n=j$ infinitely often for $n>m$. Each such path must hit $j$ for the first time after period $m$ at some finite time $n$. So it seems to me that the events on right-hand side of the second line form a partition of the event on right-hand side of the first line and therefore we must have an equality here. Am I going wrong somewhere? AI: The sum on the second line computes the probability that $X_n$ hits $j$ at least once. If there is a positive probability that $X_n$ hits $j$ at least once but not infinitely often (e.g. if $j$ is transient), then the inequality is strict.
H: Is my understanding of the solution for $a$ in $\sum^\infty_{n=0} e^{na} = 2$ correct? This was a homework problem that I got, and I was required to find the value of $a$ in $$\sum^\infty_{n=0} e^{na} = 2$$ So I did the following. I removed the $\sum$ along with the indexer $n$ and was left with $e^a = 2$, where the solution for $a$ is easy to find ($a = \ln 2$). Then, since I know that $e \gt 2$, the value has to be decreasing, so I added a negative sign to $\ln 2$, which gave me the answer ($-\ln 2$). I also noticed that this solution gave the familiar series $$1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} \ldots$$ which equals $2$, so I know that my answer is correct, but what is the actual/systematic way of solving series like this? AI: Call $e^a = x$. We then have $$\sum_{n=0}^{\infty} x^n = 2 \implies \dfrac1{1-x} = 2 \implies x = \dfrac12 \implies a = - \ln(2)$$ Note that $x = e^a = e^{-\ln(2)} = \dfrac12$. Hence, $$\sum_{n=0}^{\infty} e^{na} = \sum_{n=0}^{\infty} \dfrac1{2^n} = 2$$
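Checking $a=-\ln 2$ numerically (an illustrative two-line sketch):

```python
import math

a = -math.log(2)
print(sum(math.exp(n * a) for n in range(200)))  # ~2.0
```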
H: cardinality of sets Prove if $ |A| < |B| $ and $ |B| \leq |C|$ then $ |A| < |C|$ I know that $|A| < |B|$ means there is a one to one mapping of A onto a SUBSET of B but no one to one mapping from A to B. I also know that $|B| \leq |C|$ means there is a one to one mapping of B onto C. I am going to try to prove this, but I am fairly confident it's going to be full of holes; please help! Since $|A| < |B|$ there is $Z \subseteq B$ such that $f(a)=z$ for all $a \in A$ and $z\in Z$. Since $|B| \leq |C|$ we have $f(b)=c$ for all $b \in B$ and $c\in C$. Since $Z \subseteq B$ we have $f(z)=y$ for all $z \in Z$ and $y \in Y$ where $Y$ is a subset of $C$. Since $f$ is one to one, the inverse exists; therefore $z = f^{-1}(y)$ for $z \in Z$ and $y \in Y$, therefore $f(a) = f^{-1}(y)$. Okay, now I'm lost. Please help! AI: First, let's clarify something: The notation $|A|<|B|$ means that: There is a 1-1 function from $A$ into $B$. There is no bijection from $A$ onto $B$. Whereas the notation $|A|\leq|B|$ means only the first one. It may be that the second condition holds, or that it fails. For example $|\{0\}|\leq|\{0,1\}|$, and also $|\{1,2\}|\leq|\{0,1\}|$. To show now that $|A|<|C|$ use the fact that there are two injections, $f\colon A\to B$ and $g\colon B\to C$ to come up with an injection $h\colon A\to C$. Next show that if there was a bijection from $A$ onto $C$ then there had to be one onto $B$ as well, which is absurd.
H: Given $T(A) = A^t$ in $M_{n\times n}(\mathbb R)$. Find the polynomials and find if it's diagonalizable Given the vector space $M_{n\times n} (\mathbb R)$ and a transformation $T(A) = A^t$ (transpose): Find $m_T$, $P_T$ (the minimal polynomial and the characteristic polynomial, respectively). Find if $T$ is diagonalizable; if so, find a diagonalizing basis and the representation matrix in that basis. But how can I find exactly what $T$ does if the basis has $n^2$ vectors? Any help will be much appreciated! AI: We have $T\circ T=\mathrm{Id}$, hence the polynomial $x^2-1$, which has simple roots $-1$ and $1$, annihilates $T$; hence $T$ is diagonalizable, and since $T\neq \pm \mathrm{Id}$, $x^2-1$ is the minimal polynomial of $T$. Moreover, since $$M_n(\mathbb{R})=S_n(\mathbb{R})\oplus A_n(\mathbb{R})$$ the matrix of $T$ in a basis corresponding to this decomposition is $\mathrm{diag}(1,\ldots,1,-1,\ldots,-1)$ with $\frac{n(n+1)}{2}=\dim S_n(\mathbb{R})$ eigenvalues $1$ and $\frac{n(n-1)}{2}=\dim A_n(\mathbb{R})$ eigenvalues $-1$, hence the characteristic polynomial is $$\chi_T(x)=(x-1)^{\frac{n(n+1)}{2}}(x+1)^{\frac{n(n-1)}{2}}$$
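One can verify the eigenvalue multiplicities numerically by writing $T$ as an $n^2\times n^2$ permutation matrix acting on $\operatorname{vec}(A)$ (an illustrative sketch for $n=3$):

```python
import numpy as np

n = 3
T = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        T[j * n + i, i * n + j] = 1.0   # T sends the basis matrix E_ij to E_ji

eigs = np.linalg.eigvalsh(T)            # T is symmetric, so eigvalsh applies
print(np.round(eigs))                   # three -1's and six +1's: n(n-1)/2 and n(n+1)/2
```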
H: Weak convergence in $L^2$ and CDF Assume that a sequence $X_n \in L^2(\Omega,F,P)$ converges in distribution to a CDF $F_X$ ($F_n(t)\rightarrow F_X(t)$ for every point of continuity of $F_X$), and also that $X_n$ converges weakly to $Z$ in $L^2(\Omega,F,P)$ $(\mathbb{E}X_nY \rightarrow \mathbb{E}ZY$ for every $Y \in L^2(\Omega,F,P))$. Does it mean that the CDF of $Z$ is equal to $F_X$? AI: That is not true. Take $\{ X_n \}$ to be a white-noise process that is i.i.d. normal with mean $0$ and variance $1$. Then the sequence converges in distribution trivially. But $\{X_n \}$ is an orthonormal sequence in $L^2$. So it converges to $0 \in L^2$ weakly. The CDF of $0$ is evidently not the normal distribution.
H: integrating by parts $ \int (x^2+2x)\cos(x)\,dx$ I seem to be stumped on this integration by parts problem. I have $$ \int (x^2+2x) \cos(x)\,dx $$ step 1- pick my $u , dv, du, v$ $$u=x^2+2x$$ $$du=(2x+2) \,dx$$ $$dv = \cos(x)$$ $$v= \sin(x) $$ step 2- apply my formula $uv - \int v \,du$ $$(x^2+2x)(\sin(x))-\int(\sin(x)(2x+2) \,dx$$ step 3 solve my integral (I think this is where I'm screwing up) note: just working with the right hand side of the formula. distribute the $2x+2$ to my $\sin(x)$ $$\int 2x\sin(x) + 2\sin x$$ factor out a $2$ $$-2\int x \sin(x)+\sin(x)\,dx$$ take the integral of $\sin(x)$, $-\cos(x)$ so thus far I would have $$(x^2+2x)(\sin(x))-2-\cos(x) - \int x\sin(x) \, dx$$ by using a simple substitution for the last integrand I would end up with $-\cos(x)$ so my final result is $$(x^2+2x)(\sin(x))-2-\cos(x)-\cos(x) $$ this is not the answer but can someone spot where I went wrong? I simply can't see my mistake or "MISTAKES". Thanks in advance. Miguel AI: The first integration by parts went fine. We now need $\int(2x+2)\sin x\,dx$. Use integration by parts again, $u=2x+2$, $dv=\sin x\,dx$. Remark: In the OP, there is some casualness with notation. Such casualness often comes at a cost. Unpleasantly fussy people like me take off marks. And the probability of coming up with wrong answers increases. There is a pattern in the integration by parts of things like $\int(x^2+2x)\cos x\,dx$, or $\int x^3 e^{-7x}\,dx$. One integration by parts reduces the quadratic $x^2+2x$ to the linear $2x+2$. The next integration by parts reduces the linear $2x+2$ to the harmless constant $2$. Similarly, for $\int x^3 e^{-7x}\,dx$ it will take three integrations by parts to do the calculation.
H: percentage problem for salary In $\;2002,\, 2003, \, \text{and}\; 2004\;$ the total income of Jerry was $\$36,400$. His income increased by $15\%$ each year. What was his income in $2004$? Any hints or solution will be welcome. Thanks in advance. AI: $ \begin{align} I_0 & : \quad \text{income earned in}\;2002.\tag{1} \\ \\ I_1 & = 1.15 I_0:\quad\text{income earned in}\; 2003.\tag{2} \\ \\ I_2 & = 1.15 I_1 = 1.15^2 I_0:\quad \text{income earned in} \; 2004.\tag{3} \end{align} $ $$\text{Income over $3$ years}:\;\;I_0 + I_1 + I_2 = \$36{,}400$$ $$ \iff I_0 + 1.15 I_0 + 1.15^2 I_0 = I_0\underbrace{(1 + 1.15 + 1.15^2)}_{\text{sum}\; =\; 3.4725} = 36400 $$ Solve for $I_0$, the income earned in $2002$, and then compute $I_2$ (income earned in $2004),\,$ using your computed solution for $I_0$ and the relation given by $(3)$ above.
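Carrying out the last step numerically (an illustrative sketch; values rounded to cents):

```python
I0 = 36400 / (1 + 1.15 + 1.15 ** 2)  # income in 2002
I2 = 1.15 ** 2 * I0                  # income in 2004
print(round(I0, 2), round(I2, 2))    # 10482.36 and 13862.92
```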
H: Every principal ideal domain satisfies ACCP. Every principal ideal domain $D$ satisfies ACCP (ascending chain condition on principal ideals) Proof. Let $(a_1) ⊆ (a_2) ⊆ (a_3) ⊆ · · ·$ be a chain of principal ideals in $D$. It can be easily verified that $I = \displaystyle{∪_{i∈N} (a_i)}$ is an ideal of $D$. Since $D$ is a PID, there exists an element $a ∈ D$ such that $ I = (a)$. Hence, $a ∈ (a_n)$ for some positive integer $n$. Then $I ⊆ (a_n) ⊆ I$. Therefore, $I = (a_n)$. For $t ≥ n$, $(a_t) ⊆ I = (a_n) ⊆ (a_t)$. Thus, $(a_n) = (a_t)$ for all $t ≥ n$. I have proved $I$ is an ideal in the following way: Let $ x,y\in I$. Then there exist $i,j \in \mathbb{N}$ s.t. $x \in (a_i)$ & $y \in (a_j)$. Let $k \in \mathbb{N}$ s.t. $k>i,j$. Then $x \in (a_k)$ & $y \in (a_k)$. As $(a_k)$ is an ideal, $x-y \in (a_k)\subset I$ and $rx,xr \in (a_k)\subset I$. So $I$ is an ideal. Is it correct? AI: Your proof is right, but you can simply let $k=\max(i,j)$.
H: How many ways are there to represent the number $N$? I was given a task that doesn't require any special knowledge of math, but got stuck with it. Here it is: How many ways are there to represent the number $N$ in the following way: $$ N = a_3 \cdot 10^3 + a_2 \cdot 10^2 + a_1\cdot10+a_0 \ \ \ (1)$$ $$ a_i \in \mathbb{Z}_{\geq0}, \ \ \ \ 0\leq a_i\leq99, \ \ i=0;1;2;3$$ for $N=1091$? Do 10 different numbers $N$ exist that are representable in exactly 110 ways as in $(1)$? How many numbers $N$ representable as in $(1)$ are representable in exactly 110 ways? I've written a program and found out that the answer to the first question is 110. But I have no more ideas unfortunately. Any ideas or hints leading to an analytical solution are greatly appreciated! AI: Hint: write each $a_i$ as $10b_i+c_i$ with $b_i,c_i\in \{0,1,...,9\}$ (and use the uniqueness of decimal representation). Elaboration: if you write it out like this, then you can do the following analysis for $N=10^4d_4+10^3d_3+10^2d_2+10d_1+d_0$. Write $N=10^4b_3+10^3(c_3+b_2)+10^2(c_2+b_1)+10(c_1+b_0)+c_0$. Notice that you can possibly have carry-overs from $10$ to $10^2$, from $10^2$ to $10^3$ and from $10^3$ to $10^4$. There are exactly $(d_1+1)(d_2+1)(d_3+1)$ ways to represent $N$ with no carry-overs. With a carry-over from $10$ to $10^2$ (and no other carry-overs), there are $(9-d_1)(d_2)(d_3+1)$ ways. And so on. For $1091$, the only carryover that can occur is from $10^2$ to $10^3$. So we have in total $10\cdot 1\cdot 2+10\cdot9\cdot1=110$ ways. I don't think there's any smarter way to work out the second and third points than either writing a program or writing out the formula behind "And so on." and doing some more or less rough estimates.
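The count of $110$ for $N=1091$ is easy to confirm by brute force (an illustrative Python sketch along the lines of the program the question mentions):

```python
def representations(N):
    # Count (a3, a2, a1, a0) with each a_i in 0..99 and
    # N = a3*10^3 + a2*10^2 + a1*10 + a0
    return sum(1
               for a3 in range(100)
               for a2 in range(100)
               for a1 in range(100)
               if 0 <= N - (a3 * 1000 + a2 * 100 + a1 * 10) <= 99)

print(representations(1091))  # 110
```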
H: Limit in a sequence I know that $\lim\sqrt[n]{a}=1$ (where $a > 0$ is a real number). I know also that $\lim{\frac{1}{n}}=0$. But can you explain to me why $\lim\sqrt[n]{2 + \frac{1}{n}}= 1$? AI: Since $2<2+\frac1n\le 3$, you can compare your sequence with $\sqrt[n]2$ and $\sqrt[n]3$.
H: Integer Linear Programming (ILP): NP-hard vs. NP-complete? I was thinking about examples where a problem is NP-hard but not NP-complete, and ILP came to mind. It is obviously NP-hard, but is it NP-complete? I.e., is it in NP? Given a certificate (the alleged minimum), we certainly cannot check if it is the minimum in polynomial time. Therefore, I don't see how we can convert this into a decision problem. So is ILP NP-hard but not NP-complete? Also feel free to include examples (apart from the halting problem) of problems which are NP-hard but not NP-complete. AI: It depends on exactly what you mean by ILP. The standard decision problem formulation of Integer Programming is "there exists an integer solution with objective less than $k$". This is NP-complete (google is your friend). If you have some different decision problem, it may or may not be; you have to specify the problem before we can tell.
H: Pullback of a coproduct in an Abelian category is 0? Let $i_P:P\rightarrow P\oplus Q$ and $i_Q:Q\rightarrow P\oplus Q$ be a coproduct in an abelian category $\mathcal A$, and let $t:T\rightarrow P$, $u:T\rightarrow Q$ be arrows such that $i_Pt=i_Qu$. I have to prove $t=u=0$. The only thing I came up with was to show that $0$, the initial and terminal object of the category, is the pullback of the arrows $i_P$ and $i_Q$; I'm sure this must be true, but how can I prove it? AI: Look at the projection maps $p_P:P\oplus Q \rightarrow P$ and $p_Q:P\oplus Q \rightarrow Q$ (those associated to the product structure of $P\oplus Q$). We have $p_Pi_P = id_P$ and $p_Pi_Q = 0$. (We have similar properties with $P$ and $Q$ switched.) Now just apply this to $i_Pt = i_Qu$: applying $p_P$ gives $t = p_Pi_Pt = p_Pi_Qu = 0$, and applying $p_Q$ gives $u=0$.
H: Existence of a certain subgroup of a group Let $G$ be a finite group such that $G=P\rtimes Q$ where $P\in {\rm Syl}_p(G)$; $P\cong \Bbb{Z}_p\times \Bbb{Z}_p$ and $Q\in {\rm Syl}_q(G)$; $|Q|=q$ ($p, q$ are primes). Can we classify groups $G$ which contain a subgroup of order $pq$? (We know that the alternating group $A_4$ has no subgroup of order 6.) Thank you for your thoughts on this! AI: Yes. If $q$ does not divide $p-1$, there is 1 such group, the abelian group. If $q=2$, then there are 3 such groups. Otherwise there are $3+(q-1)/2$ such groups up to isomorphism. The gist is that $P$ is a two-dimensional $Q$-module, and a subgroup of order $pq$ corresponds to a one-dimensional submodule. Since $p \neq q$ by your Sylow assumptions, $P$ is completely reducible, so is a direct sum of two one-dimensional representations. If $q$ does not divide $p-1$, then the only one-dimensional representation is the trivial representation, and so the only such $G$ is $P \times Q$, an abelian group. If $q=2$, then there are two such representations, one where the element $x$ of order 2 acts as the identity, $x^{-1} y x = y$ for $y \in P_0$, and one where $x$ acts as inversion, $x^{-1} y x = y^{-1}$ for $y \in P_1$. $P$ itself is either the direct sum of two copies of $P_0$, two copies of $P_1$, or one copy of each. Three distinct groups $G$. If $q$ divides $p-1$ and is odd, then we get $q$ distinct representations $P_0, P_1, \dots P_{q-1}$. A semi-direct product $(P_i \oplus P_j) \rtimes Q$ is isomorphic to a semi-direct product $(P_{ik} \oplus P_{jk}) \rtimes Q$ whenever $k$ is relatively prime to $q$ (all indices taken mod $q$). Hence we get three sort of special examples: $(P_0 \oplus P_0) \rtimes Q$, $(P_0 \oplus P_i) \rtimes Q$, and $(P_i \oplus P_i) \rtimes Q$ (where different nonzero $i$ all give the same group), and then $(q-1)/2$ others all of the form $(P_1 \oplus P_i) \rtimes Q$, where the $q-1$ part of the count comes from letting $i$ be any nonzero index, and the $(q-1)/2$ is from realizing $(P_1 \oplus P_i) \rtimes Q \cong (P_i \oplus P_1 ) \rtimes Q \cong (P_1 \oplus P_{1/i}) \rtimes Q$.
H: Representation of matrix A = BC This is straight from a textbook (Cullen): Show that every $m \times n$ matrix $A$ of rank $r$ has a representation as $A = BC$, where $B$ is an $m \times r$ matrix whose columns are the first $r$ linearly independent columns of $A$ and where $C$ is an $r \times n$ matrix in row-reduced echelon form. How is $C$ related to the row-reduced echelon matrix row equivalent to $A$? From a theorem I know that if $A_{n \times n}$ is nonsingular and $PAQ = I$, then $A^{-1} = QP$. But this only works for nonsingular matrices. Can this idea be generalised, or am I barking up the wrong tree here? AI: Let $A=\begin{bmatrix} A_1&A_2&A_3&\dots&A_n \end{bmatrix}$. Here $A_i\ (1\le i\le n)$ are the columns of $A$. Let $A_{i_{k}}\ (1\le k\le r)$ be the first $r$ independent columns of $A$. Let $B=\begin{bmatrix} A_{i_1}&A_{i_2}&A_{i_3}&\dots&A_{i_r} \end{bmatrix}$. Let $C=\begin{bmatrix} C_{1}&C_{2}&C_{3}&\dots&C_{n} \end{bmatrix}$ ($C_i$ are the columns of $C$). $A_1=A_{i_1}$ (as we can always select a set of $r$ independent vectors from a set of $n$ vectors spanning an $r$-dimensional space such that a particular vector $v$ is always there). Now I will show that if $BC$ equals $A$, then $C$ must be in its row-reduced echelon form. We must have $C_1=\begin{bmatrix} 1&0&0&\dots&0 \end{bmatrix}^t$. $A_2$ is either $kA_1$ for some $k\in F$, or $A_2$ and $A_1$ are linearly independent, in which case $A_2=A_{i_2}$ (because $A_{i_2} \in$ the set of first $r$ independent columns of $A$). With a similar argument one can establish that $A_{p}=\sum_{j=1}^{k}a_jA_{i_j}$ (with some $a_j\ne 0$) or $A_p=A_{i_{k+1}}$. Here $\{A_{i_1},A_{i_2},\dots, A_{i_k}\}$ is the minimum spanning subset of $\{A_1,A_2,\dots , A_{p-1} \}$ (by a minimum spanning subset $S$ of $T$ I mean that $T\subseteq \text{span} (S)$, $S\subseteq \{A_{i_1},A_{i_2},\dots, A_{i_r}\}$ and there is no $W\subset S$ such that $T\subseteq \text{span} (W)$). From this it easily follows that $C$ is in row-reduced echelon form.
H: Correlation Coefficient - $\rho(X,Y)$. If I have two random variables $$X=\begin{pmatrix}2&3&4&5&6&7&8&9&10&11&12\\ \frac{1}{36}&\frac{2}{36}&\frac{3}{36}&\frac{4}{36}&\frac{5}{36}&\frac{6}{36}&\frac{5}{36}&\frac{4}{36}&\frac{3}{36}&\frac{2}{36}&\frac{1}{36}\end{pmatrix}$$ and $$Y=\begin{pmatrix}2&3&4&5&6&7&8&9&10&11&12\\ \frac{1}{36}&\frac{2}{36}&\frac{3}{36}&\frac{4}{36}&\frac{5}{36}&\frac{6}{36}&\frac{5}{36}&\frac{4}{36}&\frac{3}{36}&\frac{2}{36}&\frac{1}{36}\end{pmatrix}$$ how can I calculate $$\rho(X,Y) \mbox{?}$$ ($\rho$ is the correlation coefficient.) Thanks:) AI: You cannot find the correlation without additional information. $X$ and $Y$ could be independent, in which case the correlation is $0$, or (in this instance) $X$ might always be equal to $Y$ so the correlation would be $1$, or $X$ might (again, in this instance) be equal to $14-Y$ (as $X$ goes from $2$ to $12$, then $14-X$ goes from $12$ to $2$), in which case the correlation is $-1$, or they could be related in more complicated ways, in which case the correlation could have any of a large (but in this case finite) number of possible other values between $1$ and $-1$. Since the number of possible values of the correlation is in this case finite, one could say that there's enough information given to deduce at least something about the correlation.
H: Prove or disprove: if $f$ and $fg$ are continuous then $g$ is continuous. Prove or provide a counterexample: Suppose that $f$ and $g$ are defined and finite valued on an open interval $I$ which contains $a$, that $f$ is continuous at $a$, and that $f(a)\neq 0$. Then $g$ is continuous at $a$ if and only if $fg$ is continuous at $a$. I don't suppose it's true, based on the fact that the common theorem '$f, g$ continuous implies $fg$ continuous' is not stated as true both ways; obviously, this suggests exceptions. The only ones I can think of, however, are ones that don't fit the "open interval" or "$f(a)\neq 0$" parts, or ones where both $f$ and $g$ are discontinuous. I've also tried proving it, but with no luck. Help? :-S AI: $f$ is continuous and nonzero at $a$, hence $1/f$ is continuous at $a$. Since $fg$ is continuous at $a$ as well, so is their product $(fg)(1/f)=g$. (The other direction doesn't need a trick.)
H: Limit of a Sequence involving cubic root I succeeded in finding the following limit by applying binomials and the squeeze theorem: $$\lim(\sqrt{n+1} - \sqrt{n}) = \lim\frac{1}{\sqrt{n+1} + \sqrt{n}} = 0$$ because $0 \leq \frac{1}{\sqrt{n+1} + \sqrt{n}} \leq \frac{1}{\sqrt{n}}$. But I need help because I'm not finding any way to simplify and solve the following limit: $$\lim(\sqrt[3]{1-n^3} + n)$$ AI: Hint: $$(a^3+b^3)=(a+b)(a^2-ab+b^2) $$
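One way to finish from the hint (a sketch): take $a=\sqrt[3]{1-n^3}$ and $b=n$, so that $a^3+b^3=1$ and $$\sqrt[3]{1-n^3}+n=\frac{1}{\bigl(\sqrt[3]{1-n^3}\bigr)^2-n\sqrt[3]{1-n^3}+n^2}\to 0,$$ since the denominator grows like $3n^2$.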
H: Isomorphisms between Orderings If $h$ is an isomorphism between $(P,<)$ and $(Q,\prec)$, show that $h^{-1}$ is an isomorphism between $(Q,\prec)$ and $(P,<)$. DEFINITION: if $h$ is an isomorphism between $(P,<)$ and $(Q,\prec)$, then $h(p_{1}) \prec h(p_{2})$ whenever $p_{1}<p_{2}$. (I have this definition from Definition 5.17 of Hrbacek and Jech.) How do I go about the inverse function? This is what I have so far: consider $p_{1}$ and $p_{2} \in P$ such that $p_{1} < p_{2}$; then we know by definition of $h$ that $h(p_{1}) \prec h(p_{2})$. Can I conclude from there that $h^{-1}(h(p_{1})) < h^{-1}(h(p_{2}))$? Let $h(p_{1})=q_{1}$ and $h(p_{2})=q_{2}$ where $q_{1} , q_{2} \in Q$; then $h^{-1}(h(p_{1}))=h^{-1}(q_{1})$ and $h^{-1}(h(p_{2}))=h^{-1}(q_{2})$. Please help. AI: HINT: In order to show that $h^{-1}$ satisfies the definition of an isomorphism from $\langle Q,\prec\rangle$ to $\langle P,<\rangle$, you must show that $h^{-1}(q_1)<h^{-1}(q_2)$ if and only if (not just whenever) $q_1\prec q_2$. Thus, you should be starting with arbitrary $q_1,q_2\in Q$ such that $q_1\prec q_2$, not with elements of $P$. Suppose that $q_1,q_2\in Q$ and $q_1\prec q_2$. Since $h$ is an isomorphism, there are $p_1,p_2\in P$ such that $h(p_1)=q_1$ and $h(p_2)=q_2$; why? Clearly $p_1=h^{-1}(q_1)$ and $p_2=h^{-1}(q_2)$. Now, knowing that $q_1\prec q_2$, what can you say about the relative order of $p_1$ and $p_2$ in $P$? Is it possible that $p_1=p_2$? Why? Is it possible that $p_2<p_1$? Why? And finally, is it possible that $p_1$ and $p_2$ are not comparable? Why? (All of those Why? questions have essentially the same answer.)
H: Question about normal subgroup and isomorphism relation Maybe by using the following theorem: If $N$ is a normal subgroup of $G$, then the function $\phi: G \to G/N,$ given by $\phi(g)=gN$ yields a surjective homomorphism from $G$ to $G/N$ with $\ker(\phi)=N$. I want to solve the following exercise: $G=$ the group of all matrices of the form $ \pmatrix{1 & a \\ 0 & b }$ where $b \neq 0$. $N$ is the group of all matrices of the form $\pmatrix{1 & c \\ 0 & 1 }$, where $c \in \mathbb{R}$. It's a normal subgroup of $G$, because $gng^{-1} \in N$ for all $g\in G,n\in N$. Now I have to show that $\mathbb{R} \setminus \{0 \}$ and $G/N$ are isomorphic. I should do this by showing that the map $b \mapsto \pmatrix{1 & 0 \\ 0 & b } N$, defined for $b\in \mathbb{R}\setminus \{0 \}$, is an isomorphism, but I am stuck on how to show this. AI: Hints: remember the First Isomorphism Theorem: you could define $$\phi:G\to\Bbb R-\{0\}:=\Bbb R^*\;,\;\;\phi\begin{pmatrix}1&a\\0&b\end{pmatrix}:=b$$ Prove the above is a group homomorphism between $\,G\,$ and $\,\Bbb R^*\,$ whose kernel is $\,N\,$ and now apply the FIT...
H: Did I differentiate this correctly? I differentiated $$(2x+1)^2 \sqrt{4x+1}$$ and got $(8x+4)(\sqrt{4x+1})$+$\frac{2}{\sqrt{4x+1}}(2x+1)^2$. Is this correct? I ask because Wolfram Alpha gave me a different answer. Thanks in advance. AI: Your answer simplifies as follows: $$\begin{aligned} &(8x+4)(\sqrt{4x+1})+\frac{2}{\sqrt{4x+1}}(2x+1)^2 \\ =& \frac{(8x+4)(4x+1)}{\sqrt{4x+1}}+\frac{2(2x+1)^2}{\sqrt{4x+1}} \\ =& \frac{32x^2 + 24x + 4}{\sqrt{4x+1}} + \frac{8x^2 + 8x + 2}{\sqrt{4x+1}} \\ =& \frac{40x^2 + 32x + 6}{\sqrt{4x+1}} \\ =& \frac{2(10x + 3)(2x+1)}{\sqrt{4x+1}}. \end{aligned}$$ This matches WolframAlpha's answer, so your derivative is correct, just not fully simplified.
H: understanding the proof of $\int_A p'(t)dt=\frac{1}{2}\int_0^1 p'(t)dt$ I need to understand the proof of a theorem from the book "Geometric Group Theory" by Graham and Roller. The theorem says: Let $p:[0,1]\rightarrow \mathbb{R}^n$ be a $C^1$-path. Then there exists an open subset $A\subset[0,1]$ such that $\int_A p'(t)dt=\frac{1}{2}\int_0^1 p'(t)dt(=\frac{1}{2}p(1))$. And the proof is: Every point $s=(t_1,...,t_n)\in \mathbb R^n$ lying on the (topological) sphere $\sum^n_{i=1} |t_i|=1$ defines a partition of $[0,1]$ into two subsets $[0,1]=A_+^s \cup A_-^s$, as follows: $[0,1]$ is covered by $n$ consecutive intervals of lengths $|t_1|,\dots,|t_n|$, and by definition $t\in A_+^s$ if it is contained in an interval $[|t_1|+\cdots+|t_i|,\,|t_1|+\cdots+|t_{i+1}|]$ with $t_{i+1}\geq0$, and $t \in A_-^s$ if the $t_{i+1}$ corresponding to $t$ is negative. Then we have a (continuous!) map of our sphere to $\mathbb R^n$ given by $s\mapsto \int_{A_+^s} p'(t)dt -\int_{A_-^s} p'(t)dt$, which by the Borsuk-Ulam theorem vanishes at some $s$ on the sphere. Then we take either of the two sets $A_+^{s_0}$, $A_-^{s_0}$ for $A$. AI: If you pick some $n$ numbers $t_i$ with $\sum_i |t_i|=1$, then you get an increasing sequence $q_i = \sum_{j=1}^{i}|t_j|$ (with $q_0=0$), $$ 0 = q_0 \leq q_1 \leq q_2 \leq \cdots \leq q_n = 1. $$ So the $q_i$ partition the interval $[0,1]$ into $n$ intervals. Then you take the intervals $[q_{i-1}, q_i]$ and put them into either $A_-$ or $A_+$ depending on the sign of $t_i$. The point of doing this is that the map $$f:(t_1,\ldots,t_n)\mapsto \int_{A_+}p'(t)\,dt - \int_{A_-}p'(t)\,dt $$ is a continuous map from the sphere $\{(t_i):\sum_i|t_i|=1\}$ to $\mathbb{R}^n$, so you can apply the Borsuk-Ulam theorem to it, and use the fact that $f(-t_i) = -f(t_i)$. Once you do, all you have left is to note that $A_+ = [0,1] \setminus A_-$, because of how $A_\pm$ are defined. Does this answer your question?
H: Convexity of a function depending on value of parameters Determine the convexity of the function $J(u)=cu^r$, $J:[a,b]\rightarrow R$, $0<a<b<\infty$, depending on the values of the parameters $c,r\in R$. I know the definition of convexity: "A function $J$, defined on a convex set $U$, is convex on $U$ if $J(\alpha u + (1-\alpha) v)\leq \alpha J(u) + (1-\alpha) J(v)$", and I know these theorems: Theorem 1 Let $U$ be a non-empty convex set in $R^n$ and $J\in C^1(U)$. The function $J$ is convex if and only if $\langle J'(u)-J'(v),u-v\rangle \geq 0$ for all $u,v\in U$. Theorem 2 Let $U$ be a convex set in $R^n$ with non-empty interior. Let $J\in C^2(U)$. The function $J$ is convex on $U$ if and only if $\langle J''(u) ξ,ξ\rangle \geq 0$ for all $u\in U$ and all $ξ\in R^n$. But I don't know which to use. I tried in 3 ways: Using Theorem 1, with $u,v\in [a,b]$: $\langle J'(u)-J'(v),u-v\rangle = \langle cru^{r-1}-crv^{r-1},u-v\rangle = cr\langle u^{r-1}-v^{r-1},u-v\rangle = cr \sqrt[2] {|u^{r-1}-v^{r-1}|^2+|u-v|^2}$. If this is OK, then $J$ is convex if 1) $c,r\geq 0$ or 2) $c,r\leq 0$. Using Theorem 2: $J'(u)=rcu^{r-1}$, $J''(u)=r(r-1)cu^{r-2}$, $\langle J''(u) ξ,ξ\rangle=\langle r(r-1)cu^{r-2} ξ,ξ\rangle$, and I don't know what to do next. Using the definition, I should show that $J(\alpha u + (1-\alpha) v)\leq \alpha J(u) + (1-\alpha) J(v)$ for all $u,v\in [a,b]$. So, let $u,v\in [a,b]$: $J(\alpha u + (1-\alpha) v)=c(\alpha u + (1-\alpha) v)^r$, and I don't know what to do now. AI: Note that $J''(u) = cr(r-1)u^{r-2}$, and that $u^{r-2}>0$ on $[a,b]$ since $a>0$. Then just work through the possibilities: If $r=0$, then $J(u)$ is a constant, hence convex and concave. If $c =0$, then $J(u)$ is a constant, hence convex and concave. If $r=1$, then $J$ is convex and concave, regardless of $c$. If $r > 1$ and $c > 0$, then $J''(u) \ge 0$, hence convex. If $r > 1$ and $c < 0$, then $J''(u) < 0$, hence concave (and not convex). If $0 < r < 1$ and $c > 0$, then $J''(u) < 0$, hence concave (and not convex). If $0 < r < 1$ and $c < 0$, then $J''(u) > 0$, hence convex. If $r <0$ and $c >0$, then $J''(u) > 0$, hence convex. If $r <0$ and $c <0$, then $J''(u) < 0$, hence concave (and not convex).
H: Solving distributional differential equation How to solve differential equation in $\mathcal D'(R)$: $$u''+u=\delta'(x),$$ where $\delta$ is Dirac Delta function? Solution of homogeneous problem is $C_1\cos{x}+C_2\sin{x}$, so using the variation of parameters, I got that the final solution to the problem should be: $$-\cos{x}\int\delta'(x)\sin{x}\mathrm dx+\sin{x}\int\delta'(x)\cos{x}\mathrm dx,$$ but I don't know how to evaluate integrals $\int\delta'(x)\sin{x} \mathrm dx$, $\int\delta'(x)\cos{x}\mathrm dx$. Any help is appreciated. Thanks in advance. AI: Integrating by parts, we have $$\int\delta'(x)\sin x \ dx = \delta(x)\sin x - \int\delta(x)\cos x\ dx = 0 - \theta(x) = -\theta(x),$$ where $\theta(x)$ is the Heaviside step function. (The distributional product of $\delta(x)$ with $\sin x$ is zero.) I leave it to you to show that $\int\delta'(x)\cos x\ dx = \delta(x)$. It is a good exercise to show that the resulting particular solution $$u(x) = \theta(x)\cos x$$ satisfies the inhomogeneous differential equation.
H: showing that $S=\sum^n_{i=1} \ln(X_i)$ is a complete sufficient statistic We have a random sample $X_1,X_2,\ldots,X_n$ from a distribution with density $$ f(x)=\theta x^{-(\theta +1)} $$ for $x>1$, and $0$ otherwise. Now the question is: Show that $S=\sum^n_{i=1} \ln(X_i)$ is a complete sufficient statistic and use this result to derive the UMVUE for $\frac{1}{\theta}$. I formed the joint density function like so: $$ \begin{align} f_\bar{x}(x_1,\ldots,x_n;\theta) &= \theta^n\prod_{i=1}^n x_i^{-(\theta +1)}I_{(1,\infty)}(x_i)\\ &= \theta^n\prod_{i=1}^n [I_{(1,\infty)}(x_i) ] e^{\sum(-\theta-1)\log(x_i)}\\ &=\theta^n\prod_{i=1}^n I_{(1,\infty)}(x_i)e^{(-\theta-1)\sum \log(x_i)} \end{align} $$ But I'm doing this blindly, just following my book. What am I doing? What does being a complete sufficient statistic tell me? This is a kink in my brain; I can't do this sort of thing without knowing what it is. Also, how do I go on from here? How do I get the UMVUE? AI: You can write $\prod_i I_{(1,\infty)}(x_i)$ as $I_{(1,\infty)} (\min\{ x_1,\ldots,x_n\})$. That matters in contexts where instead of $(1,\infty)$ you have $(\kappa,\infty)$ and $\kappa$ itself is to be estimated. It means that the minimum observation is itself one component of a sufficient tuple. One thing to be careful about is that your last displayed expression is really $$ \theta^n\left(\prod_i I_{(1,\infty)}(x_i) \right)\left(e^{(-\theta-1)\sum_i\log x_i}\right) $$ (where, of course, $\log$ is the same thing as $\ln$; I mention this since you used both notations in your posted question). Fisher's factorization criterion tells you that that sum of logarithms is indeed sufficient. What that means is: the conditional probability distribution of $X_1,\ldots,X_n$ given the value of $\sum_{i=1}^n\log X_i$ does not depend on $\theta$. Next, what does "complete" mean? A complete statistic is one that admits no unbiased estimator of $0$ except the trivial one. That means there is no nonzero function $g$ such that $$ E\left(g\left(\sum_{i=1}^n \log X_i \right)\right) $$ remains equal to $0$ as $\theta$ changes (where of course one must not allow $g$ to vary as $\theta$ changes). The Lehmann-Scheffe theorem now says you can get the UMVUE by starting with any crude unbiased estimator of $1/\theta$ --- call this estimator $T=T(X_1,\ldots,X_n)$ --- and finding $E(T\mid \sum_{i=1}^n \log X_i)$. Because of sufficiency this will be a function of the data that does not depend on $\theta$ --- hence a statistic. It will be the UMVUE. It may be a good idea to seek your crude unbiased estimator of $1/\theta$ among functions of $X_1$ alone rather than all of the $X$s, simply because it's easier to find. Then you have to find the conditional expectation, which may take some work.
H: If the modulus of each eigenvalue of $B$ is less than $1$, then $B^k\rightarrow 0$ Let $B$ be a $n\times n$ matrix and let $X$ be the set of all eigenvalues of $B$. Prove that if $|m|<1$ then $\lim \limits_{k\rightarrow\infty}B^k=0$, where $m=\max X$. Thanks. Actually, there isn't an order involved. Sorry. The correct question is: Let $B$ be a $n\times n$ matrix, $X$ be the set of all eigenvalues of $B$ and $|X|=\{|x|;x\in X\}$. Prove that if $|m|<1$ then $\lim \limits_{k\rightarrow\infty}B^k=0$, where $m=\max |X|$. AI: In my answer I'll be assuming that the OP means that $X$ is the set of absolute values of the eigenvalues of $B$ (as suggested by the title), which makes $m$ be $B$'s spectral radius. Consider the Jordan normal form. Let $J$ be a Jordan matrix for $B$. Then $B=P^{-1}JP$ and $(B^k)_{k\in \Bbb N}$ converges if, and only if, $(J^k)_{k\in \Bbb N}$ converges. Now consider the powers of $J$. The standard description of the powers of Jordan blocks makes it clear that if $\color{grey}{m=\rho (J)=}\rho (B)<1$, then $\lim \limits_{k\to +\infty }(J^k)=0$, and the result follows.
H: extension of a valuation of $K$ to $K(X)$ Let $A$ be a Krull ring and $p$ a prime ideal of height 1. Then $A_p$ is a DVR with corresponding valuation $v$ on the field of fractions $K$ of $A$. Question: Can we extend this valuation to an additive valuation of $K(X)$? Remark: Matsumura, in his Commutative Ring Theory, p. 89, says that this can be done by defining $v(a_0+a_1x+\cdots+a_nx^n)=\min_{i}v(a_i)$. However, this extension of $v$ does not seem to me to be additive, i.e. $v(f(x)g(x)) \neq v(f(x))+v(g(x))$. What am I missing? AI: This is easy: consider $f=a_0+a_1x+\cdots+a_mx^m$ and $g=b_0+b_1x+\cdots+b_nx^n$. Denote $v(f)=r$, $v(g)=s$, and take $i$, respectively $j$, minimal with the property that $v(a_i)=r$, respectively $v(b_j)=s$. It's obvious that $v(fg)\ge r+s$, and it remains to prove that equality holds. Set $c=\sum_{k+l=i+j}a_kb_l$, the coefficient of $x^{i+j}$ in $fg$. We have $v(a_kb_l)=v(a_k)+v(b_l)>v(a_i)+v(b_j)=v(a_ib_j)=r+s$ for $(k,l)\neq(i,j)$ with $k+l=i+j$. This shows that $v(c)=r+s$. (Here I've used the following well-known property of valuations: if $v(a)>v(b)$, then $v(a+b)=v(b)$.)
H: Recurrence relations: How many numbers between 1 and 10,000,000 don't have the string 12 or 21 So the question is (to be solved with recurrence relations): How many numbers between 1 and 10,000,000 don't have the string 12 or 21? My solution: $a_n=10a_{n-1}-2a_{n-2}$. The $10a_{n-1}$ represents the number of strings of length $n$ with digits from 0 to 9, and the $2a_{n-2}$ represents the strings of length $n$ with the string 12 or 21 included. I just wanted to know if my recursion is correct; if so, I'll be able to solve the rest. Thanks in advance! AI: We look at a slightly different problem, from which your question can be answered. Call a digit string good if it does not have $12$ or $21$ in it. Let $a_n$ be the number of good strings of length $n$. Let $b_n$ be the number of good strings of length $n$ that end with a $1$ or a $2$. Then $a_n-b_n$ is the number of good strings of length $n$ that don't end with $1$ or $2$. We have $$a_{n+1}=10(a_n-b_n) +9b_n.$$ For a good string of length $n+1$ is obtained by appending any digit to a good string that doesn't end with $1$ or $2$, or by appending any digit except the forbidden one to a good string that ends in $1$ or $2$. We also have $$b_{n+1}=2(a_n-b_n) + b_n.$$ For we obtain a good string of length $n+1$ that ends in $1$ or $2$ by appending $1$ or $2$ to a string that doesn't end with either, or by taking a string that ends with $1$ (respectively, $2$) and adding a $1$ (respectively, $2$). The two recurrences simplify to $$a_{n+1}=10a_n-b_n\qquad\text{ and}\qquad b_{n+1}=2a_n-b_n.$$ For calculational purposes, these are good enough. We do not really need a recurrence for the $a_i$ alone. However, your question perhaps asks about the $a_i$, so we eliminate the $b$'s. One standard way to do this is to increment $n$ in the first recurrence, and obtain $$a_{n+2}=10a_{n+1}-b_{n+1}.$$ But $b_{n+1}=2a_n-b_n$, so $$a_{n+2}=10a_{n+1}-2a_n+b_n.$$ But $b_n=10a_n-a_{n+1}$, and therefore $$a_{n+2}=9a_{n+1}+8a_n.$$ Remark: It would have been better to have $b_n$ as above, and $c_n$ the number of strings that do not end in $1$ or $2$, and to forget about $a_n$ entirely for a while.
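A quick cross-check of the final recurrence (a sketch, not part of the answer; it counts digit strings with leading zeros allowed, which is how the answer models the problem):

```python
# Verify a_{n+1} = 10 a_n - b_n, b_{n+1} = 2 a_n - b_n against brute force.
from itertools import product

def good_count_brute(n):
    # all length-n digit strings avoiding the substrings "12" and "21"
    return sum(1 for s in product("0123456789", repeat=n)
               if "12" not in "".join(s) and "21" not in "".join(s))

def good_count_rec(n):
    a, b = 1, 0  # a_0 = 1 (the empty string), b_0 = 0
    for _ in range(n):
        a, b = 10 * a - b, 2 * a - b
    return a

assert all(good_count_brute(n) == good_count_rec(n) for n in range(6))
print(good_count_rec(7))  # good strings of length 7
```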
H: Matrix similar to a companion matrix I am currently intensively reading my linear algebra notes under dim light and was wondering whether it is true that an endomorphism whose minimal polynomial has the same degree as the dimension of the vector space is similar to a companion matrix? AI: The answer is yes, and we have this result: If $A$ is an $n$-by-$n$ matrix with entries from some field $K$, then the following statements are equivalent: $A$ is similar to the companion matrix over $K$ of its characteristic polynomial; the characteristic polynomial of $A$ coincides with the minimal polynomial of $A$, equivalently the minimal polynomial has degree $n$; there exists a cyclic vector $v$ in $V=K^n$ for $A$, meaning that $\{v, Av, A^2v,\ldots, A^{n-1}v\}$ is a basis of $V$. Source http://en.wikipedia.org/wiki/Companion_matrix
H: Proof of the inequality $(x+y)^n\leq 2^{n-1}(x^n+y^n)$ Can you help me to prove that $$(x+y)^n\leq 2^{n-1}(x^n+y^n)$$ for $n\ge1$ and $x,y\ge0$? I tried induction, but I didn't get a result. AI: I have also tried to do it by induction. It is obvious that it holds for $n=1$ and $n=2$. Assume that it also holds for $n$. Let's prove the inequality for $n+1$: $$\begin{aligned} 2^n(x^{n+1} + y^{n+1}) - (x+y)^{n+1} &= 2^{n}(x^{n+1} + y^{n+1}) -(x+y)^n(x+y)\\ &\geq 2^n(x^{n+1} + y^{n+1}) - 2^{n-1}(x^n + y^n)(x+y)\\ &= 2^{n-1}(x^{n+1} + y^{n+1} - x^ny - y^nx)\\ &= 2^{n-1}\bigl(y(y^n - x^n) + x(x^n - y^n)\bigr)\\ &= 2^{n-1}(y^n-x^n)(y-x)\\ &= 2^{n-1}(y-x)^2(y^{n-1}+y^{n-2}x+\dots+yx^{n-2} + x^{n-1}) \geq 0. \end{aligned}$$ So we have proved that $2^n(x^{n+1} + y^{n+1}) - (x+y)^{n+1} \geq 0$, which is equivalent to $(x+y)^{n+1} \leq 2^n(x^{n+1} + y^{n+1})$.
H: Is the cartesian product of groups the product of a normal subgroup and its quotient group? I'm studying elementary group theory, and just seeing the ways in which groups break apart into simpler groups, specifically, a group can be broken up as the sort of product of any of its normal subgroups with the quotient group of that subgroup. So I wondered how you could do the inverse of that operation: Given two groups $A$ and $B$, construct a group $G$ which admits a normal subgroup $H$ isomorphic to $A$, such that $G/H$ is isomorphic to $B$. I think I have a proof that the cartesian product $A \times B$ (with the usual component-wise operation) verifies (1), but since I'm just starting out I'm not totally confident in my construction. Furthermore, if I'm right, is this the only group up to isomorphism satisfying (1)? Edit: I just noticed Proving the direct product D of two groups G & H has a normal subgroup N such that N isomorphic to G and D/N isomorphic to H, which seems to positively answer my question. In that case I'd like to draw attention to the follow up question above (uniqueness up to isomorphism). AI: Yes, the direct product $A \times B$ satisfies the property, as you've noticed. But it's not unique up to isomorphism. For example, the dihedral group $D_n$ has a normal subgroup $H \simeq \mathbb Z/n \mathbb Z$, with $G/H \simeq \mathbb Z/2 \mathbb Z$, but $D_n$ is not isomorphic (for $n > 1$) to $\mathbb Z/2\mathbb Z \times \mathbb Z/n\mathbb Z$. More generally, you're asking whether, given an exact sequence of the form $1 \to N \to G \to H \to 1$, is $G$ isomorphic to $N \times H$? The answer is no, as I've shown. Many counterexamples are provided by semidirect products (something you'll learn soon enough if you're studying elementary group theory). For abelian groups, the concept of Ext functor allows one to classify all such extensions (given abelian groups $A,B$, "how many" groups $G$ are there with an exact sequence $0 \to B \to G \to A \to 0$ is given by $\mathrm{Ext}(A,B)$), but this is much more advanced.
H: algorithm to determine complexity of algorithms? Given a decision problem X, can there exist an algorithm A which, given any algorithm B which solves X in finitely many steps, determines whether B runs in polynomial time? If such an A exists, when is it possible for A to run in polynomial time? AI: For some decision problems $X$, the answer to your question is affirmative, for the trivial reason that $X$ admits no polynomial-time solution, so $A$ can be the algorithm that always says "no". But for more reasonable problems $X$, there cannot be any such $A$. Consider a problem $X$ for which there is a polynomial-time algorithm $Q$ that decides $X$. I claim that, if there were an $A$ as in your question, we could use it to decide the halting problem. To see this, consider any instance of the halting problem, say Turing machine $M$ on input $z$. Let $B$ be the following algorithm for solving the problem $X$. On any input $x$, first use $Q$ to figure out the answer, but, instead of outputting the answer immediately, do two tasks in parallel: (1) count from 1 to $2^n$, where $n$ is the length of $x$, and (2) run $M$ on input $z$. As soon as either of these tasks is finished, output the answer that $Q$ gave. If machine $M$ on input $z$ doesn't halt, then task (2) will never be completed, so $B$ will produce an output only after exponential time $2^n$. But if $M$ on $z$ halts after $s$ steps, then $B$ will produce its output after a polynomial number of steps, essentially the time needed by $Q$ plus (a constant overhead factor times) $s$. So $B$ takes polynomial time iff $M$ on $z$ halts. Now the program for $B$ can be effectively computed from that of $M$ and $z$. So by producing $B$ and then applying the assumed algorithm $A$ to it, we could decide the halting problem. Since the halting problem is known to be undecidable, it follows that there cannot be such an $A$.
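To make the construction concrete, here is an illustrative sketch in Python; `Q` (a polynomial-time decider for $X$) and `step_M_on_z` (a one-step simulator of $M$ on $z$ that returns `True` once $M$ has halted) are hypothetical callables supplied by the reduction, not real APIs:

```python
# Sketch of the algorithm B from the proof.  B always answers X correctly,
# and it runs in polynomial time iff M halts on z.
def B(x, Q, step_M_on_z):
    answer = Q(x)                    # polynomial time, by assumption; don't output yet
    for step in range(2 ** len(x)):  # task (1): count from 1 to 2^n
        if step_M_on_z(step):        # task (2): run one more step of M on z
            break                    # M halted after s steps: poly-time exit
    return answer                    # otherwise we only get here after 2^n steps
```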
H: Looking for bounds of a recursively defined sequence I'm looking for the tightest upper and lower bounds on the sequence defined recursively by $a_{0}=1$ and $a_{n}={\displaystyle \sum_{k=0}^{n-1}\frac{4}{n^{2}}a_{k}+c\cdot n}$ for $c>0$. It is obvious that $a_{n}\in\Omega\left(n\right)$ and I managed to show that $a_{n}\in O\left(n\log n\right)$ but I'm not sure if either is tight. Help would be appreciated! AI: Hint: Show by induction that $a_n\leqslant(7c+3)n+2c+1$ for every $n\geqslant0$. Thus, $a_n=\Theta(n)$. Furthermore, $\frac{a_n}n\to c$. Edit: To see that $a_n=O(n)$ is direct once one knows that $a_n=O(n\log n)$ since using $a_n=O(n\log n)$ in the recursion yields $a_n\leqslant\frac4{n^2}\sum\limits_{k\leqslant n}k\log k+cn=cn+O(\log n)$. But it is not necessary to assume that $a_n=O(n\log n)$ to proceed. To wit, once one got the idea that $a_n=O(n)$, one can try to confirm this idea by establishing an upper bound $a_n\leqslant\alpha n+\beta$ which is both true at $n=0$ (this is so if $\beta\geqslant1$) and hereditary. Thus, one wants that, for every $n\geqslant1$, $$ \alpha n+\beta\geqslant\frac4{n^2}\sum_{k=0}^{n-1}(\alpha k+\beta)+cn=\frac2{n}\alpha(n-1)+\frac4n\beta+cn. $$ If $n=1$, this reads $\alpha\geqslant3\beta+c$. In general, the condition reads $$ (\alpha-c)n^2-(2\alpha-\beta)n+2\alpha-4\beta\geqslant0, $$ which, using $n^2\geqslant2n$ if $n\geqslant2$, is guaranteed as soon as $$ (\beta-2c)n+2\alpha-4\beta\geqslant0. $$ If $\beta\geqslant2c$, since we already assumed that $\alpha\geqslant3\beta+c$, the LHS is at least $2(\beta-2c)+2(3\beta+c)-4\beta=4\beta-2c\geqslant0$. Finally the property $a_n\leqslant\alpha n+\beta$ is true for $n=0$ and hereditary if $\beta\geqslant1$, $\alpha\geqslant3\beta+c$, and $\beta\geqslant2c$, for example if one chooses the values of $\alpha$ and $\beta$ in the hint.
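A numerical sanity check (a sketch, not part of the argument) that $a_n/n\to c$ and that the hinted linear bound holds, for a sample value of $c$:

```python
# Iterate a_n = (4/n^2) * sum_{k<n} a_k + c*n and watch a_n / n approach c.
c = 0.3
a = [1.0]  # a_0 = 1
for n in range(1, 2001):
    a.append((4.0 / n**2) * sum(a) + c * n)
print(a[-1] / 2000)                             # close to 0.3
print(a[-1] <= (7 * c + 3) * 2000 + 2 * c + 1)  # True: the hinted bound
```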
H: Why $O(\epsilon^{-1})\ll O(\epsilon^{-3/2})$ When looking for the approximate roots of $\epsilon^2x^6-\epsilon x^4-x^3+8=0$, since this is a singular perturbation problem, we need to track down the three missing roots, so we consider all possible dominant balances between pairs of terms as $\epsilon\to 0$. Now suppose $\epsilon^2x^6\sim\epsilon x^4$ ($\epsilon\to 0$) is the dominant balance. Then $x=O(\epsilon^{-1/2})$ ($\epsilon\to 0$). It follows that the terms $\epsilon^2x^6$ and $\epsilon x^4$ are both $O(\epsilon^{-1})$. But $\epsilon x^4\ll x^3=O(\epsilon^{-3/2})$ as $\epsilon\to 0$. Instead of writing out the entire comparison each time, I will use the shorthand $\ll$. Now I am trying to understand why $O(\epsilon^{-1})\ll O(\epsilon^{-3/2})$ as $\epsilon\to 0$. I am finding it hard to get the logic behind it, or at least to understand it in a more general setting. I tried to reason as follows: We know $O (\epsilon^{-1/3})\ll O(\epsilon^{-1})$ is obvious, so I guess it is also the case that $O (\epsilon^{-1/2})\ll O(\epsilon^{-1})$, because taking a root (square or cube) of $\epsilon$ makes it bigger for small $\epsilon$; since $\epsilon$ is in the denominator, the whole fraction gets smaller. But $O (\epsilon^{-3})\gg O(\epsilon^{-1})$, because for small $\epsilon$ taking a power of it makes it smaller; since $\epsilon$ is in the denominator, the whole fraction gets larger. Then if we "combine" the operations of taking the square root and raising to the power of 3, how can we conclude $O(\epsilon^{-1})\ll O(\epsilon^{-3/2})$? Why not $O(\epsilon^{-1})\gg O(\epsilon^{-3/2})$? Is there any general rule to compare $O(\epsilon^{-1})$ with any $O(\epsilon^{-n/m})$? And, to make it even more general, to compare $O(\epsilon^{-n/m})$ with $O(\epsilon^{-r/s})$? Thanks! AI: Let $p$ and $q$ be positive real numbers with $p < q$. Then $$ \frac{\epsilon^{-p}}{\epsilon^{-q}} = \epsilon^{q-p} \to 0 $$ as $\epsilon \to 0$ with $\epsilon > 0$. So $\epsilon^{-p}\ll\epsilon^{-q}$ whenever $p<q$; in particular $\epsilon^{-1}\ll\epsilon^{-3/2}$ since $1<\frac{3}{2}$. To compare $O(\epsilon^{-n/m})$ with $O(\epsilon^{-r/s})$, you simply compare the positive exponents $\frac{n}{m}$ and $\frac{r}{s}$: the larger exponent dominates.
H: Methods to show polynomials are irreducible I would like to show that $x^3 + x^2 - 2x - 1$ is an irreducible polynomial over $\mathbb{Q}$. What are my standard lines of attack to solve this problem? Typically I go to Eisenstein, but it does not apply to this polynomial (I believe). I'm familiar also with Gauss' lemma; are there other theoretical tools I can use to prove this? Edit: Rational root test? AI: Indeed, as your polynomial is degree $3$ $(\dagger)$, the rational root test is well-suited to the task. (We are looking to show irreducibility over $\mathbb Q$, a.k.a. the rationals, after all!) Here the only possible rational roots are $\pm 1$: a rational root in lowest terms must have numerator dividing the constant term $-1$ and denominator dividing the leading coefficient $1$. Since $f(1)=-1$ and $f(-1)=1$, there is no rational root, and the polynomial is irreducible over $\mathbb Q$. $(\dagger)$ A degree-$3$ polynomial over $\mathbb Q$, if reducible, must have a rational root, since any reducible polynomial in $\mathbb {Q}[x]$ must factor into a product of at least one linear factor and a quadratic (though it may reduce to a product of three linear factors). This isn't necessarily the case for polynomials of greater degree.
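If SymPy is available, the conclusion can be sanity-checked mechanically (a sketch; not needed for the proof):

```python
# Irreducibility check over the rationals via SymPy.
from sympy import symbols, Poly
x = symbols('x')
print(Poly(x**3 + x**2 - 2*x - 1, x).is_irreducible)  # True
```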
H: Is constructing a function that DNE a sufficient counterexample to show the function does not diverge to $\infty$? Prove or disprove: If $f(x)\to 0$ as $x\to a^+$ and $g(x)\geq 1$ for all $x\in \mathbb{R}$, then $g(x)/f(x)\to\infty$ as $x\to a^+$. Counterexample: Let $f(x)=0$ and $g(x)=1$ for all $x\in\mathbb{R}$. Clearly, $f(x)\to 0$ as $x\to a^+$ and $g(x)\geq 1$ for any $a\in\mathbb{R}$, but $g(x)/f(x)=1/0$ does not exist for any $x\in\mathbb{R}$, and thus cannot diverge to $\infty$ as $x\to a^+$. Question: Is purposefully constructing a function in this manner a sufficient counterexample? Why or why not? AI: Arguably this is not a counterexample. One, fairly reasonable, way to interpret $$\lim_{x\to a+} \frac{g(x)}{f(x)}=\infty$$ to avoid the problem of $0$ in the denominator is that for all $M$, there is some $\epsilon >0$ such that for all $x\in(a,a+\epsilon]$ $\color{blue}{\textrm{where the fraction is defined}}$, $\frac{g(x)}{f(x)}>M$. In your example, the fraction is never defined, and hence for any $\epsilon$ the property holds vacuously.
H: Find the limit using a calculator We have $u_0 = 6$ and $u_{n+1} = \dfrac{1}{2} u_n + \dfrac{1}{u_n}$. We can use our graphing calculator to make a 'web diagram' (called a cobweb diagram in English; it sometimes resembles a spider's web). When I use my calculator for very high values of $n$ I get the same answer, $12.164$. Is this the limit? How would I be able to obtain this limit without the graphing calculator? Is it just the intersection with the line $y=x$? AI: When $x>\sqrt{2}$, one has $\sqrt{2}<\frac{x}{2}+\frac{1}{x}<x$, so $u_0>u_1>\dots>\sqrt{2}$. So the sequence converges, say $u_n\to u$; then $u=\sqrt{2}$, found by solving $\frac{u}{2}+\frac{1}{u}=u$. (And yes, graphically this is the intersection of $y=\frac{x}{2}+\frac{1}{x}$ with the line $y=x$: a limit of the iteration must be a fixed point of the map.) (For completeness: if $0<u_0<\sqrt{2}$, then $u_1>\sqrt{2}$, so the result will be the same; if $u_0<0$, then the limit is $-\sqrt{2}$ by symmetry.)
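A few iterations (a sketch) make the limit visible without a calculator; this map is exactly the Babylonian/Newton step for $\sqrt2$:

```python
# Iterate u_{n+1} = u_n/2 + 1/u_n from u_0 = 6; it settles at sqrt(2).
u = 6.0
for n in range(8):
    u = 0.5 * u + 1.0 / u
    print(n + 1, u)   # converges quickly to 1.41421356...
```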
H: Fibonacci sequence, prove by induction that $a_{2n} \leq 3^n$ Let $\{a_n\}$ be the Fibonacci sequence. Prove by induction that $a_{2n} \leq 3^n$ (here the Fibonacci sequence is defined as $a_1=1$, $a_2 = 2$, and $a_n = a_{n-1} + a_{n-2}$). What I know: $3 \leq a_{2k-2} \leq 3^k$. We need to prove that $a_{2k+2} \leq 3^{k+1}$. AI: Note that the sequence is increasing, so that $$a_n = a_{n-1} + a_{n-2} < 2 a_{n-1} \qquad (*)$$ Now, once you've established the induction hypothesis $a_{2k} \le 3^k$ for some $k$, simply expand $a_{2(k+1)}$ by the definition: $$a_{2k+2} = a_{2k+1} + a_{2k}$$ Apply $(*)$ to one of the terms, and then invoke the induction hypothesis.
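Not a proof, but a quick numerical check of the claim (a sketch, using the question's indexing $a_1=1$, $a_2=2$):

```python
# Check a_{2n} <= 3^n for the first several n.
a = [0, 1, 2]  # pad index 0 so that a[n] is the n-th term
for k in range(3, 31):
    a.append(a[-1] + a[-2])
print(all(a[2 * n] <= 3 ** n for n in range(1, 16)))  # True
```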
H: Ham sandwich for measures implies the classical one Ham sandwich theorem for measures: Let $\mu_1,\mu_2,\mu_3$ be finite Borel measures on $\mathbb{R^3}$ such that every hyperplane has measure $0$ for each of the $\mu_i$. Then there exists a hyperplane $h$ such that $\mu_i(h^+)= \frac{1}{2}\mu_i(\mathbb R^3)$, where $h^+$ denotes one of the half-spaces defined by $h$. How does this theorem imply the classical ham sandwich theorem: Let $A,B,C$ be bounded (measurable) subsets of $\mathbb{R^3}$. Then there is a plane in $\mathbb R^3$ which divides each region exactly in half by volume? AI: Let $\lambda$ be the Lebesgue measure (the 'volume') in $\Bbb R^3$ and apply the theorem to $\mu_1(X):=\lambda(X\cap A)$, $\ \mu_2(X):=\lambda(X\cap B)\ $ and $\ \mu_3(X):=\lambda(X\cap C)$. These are finite Borel measures since the sets are bounded, and every hyperplane has Lebesgue measure zero, so the hypothesis on the $\mu_i$ is satisfied.
H: Existence and uniqueness of God Over lunch, my math professor teasingly gave this argument: God by definition is perfect. Non-existence would be an imperfection, therefore God exists. Non-uniqueness would be an imperfection, therefore God is unique. I have thought about it; please critique it from a mathematical/logical point of view. Why does/doesn't this argument go through? Does it violate any logical deduction rules? Can this statement be altered in a way that it belongs to ZF + something? What about any axiomatic system? Is it possible to make mathematically precise the notion of "perfect"? AI: Existence is not a predicate. You may want to read Gödel's ontological proof, which you can find on Wikipedia. Equally good is the claim that uniqueness is an imperfection, since something which is perfect cannot be scarce and unique. Therefore God is inconsistent...?
H: Divergence and Levi-Civita connection Let $M$ be a level set of a function in $\mathbb R^3$. Then the mean curvature of $M$ is given by the trace of the second fundamental form, which is a divergence term involving the Levi-Civita connection. My question is, why is this the same as the "usual" divergence of the normal vector we learned in early multivariable calculus? AI: Write $n$ for the normal vector of $M$ in $\mathbb R^3$. Recall that the second fundamental form of $M$ is just $II(X,Y) = \langle D_Xn,Y\rangle$, where $X, Y \in \mathbb R^3$ are tangent vectors to $M$. Here $\langle \cdot, \cdot\rangle$ is the Euclidean inner product, and $D_X n$ represents the directional derivative of $n$ in the direction $X$. It is true that the second fundamental form is defined in terms of the Levi-Civita connection, but in $\mathbb R^3$ recall that $\nabla_X Y = D_X Y.$ Now, by the definition of divergence, we see that the mean curvature $H$ satisfies $$ H = \operatorname{tr} II = \operatorname{div} \,n$$ which is just the usual version of divergence you learn in multivariable calc. To be a little bit more precise, the general definition of divergence is $$\operatorname{div} X = \operatorname{tr} (Y \mapsto \nabla_Y X),$$ i.e. the trace of the map sending $Y$ to $\nabla_Y X$. Since $\nabla = D$ in $\mathbb R^3$, you can see that this agrees with the usual definition of divergence for a vector field on $\mathbb R^3$.
H: a sigma-algebra on a countable set is a topology I am trying to prove the statement in the title, i.e. that a sigma-algebra $\Sigma$ on a countably infinite set $X$ is a topology on $X$. I feel like I have an intuition of why this is true, but I can't articulate it. I'd like to request some hints to get me thinking in the right direction. (Also, if the proof involves choice, could you kindly elaborate on that part? I have not studied that before!) Thanks in advance! Edit: this fact is stated (without proof) in an answer to a related question: https://math.stackexchange.com/a/51229/39117. AI: Edit: To get this result, we must (I believe) assume the Axiom of Countable Choice (AC$_\omega$), which says, roughly, that if I have a countably infinite collection of non-empty sets $A_n$, then there is a sequence $\{a_n\}_n$ of elements of $\bigcup_nA_n$ such that $a_n\in A_n$ for each $n$. The only sticking point of showing that a sigma algebra $\Sigma$ is a topology is in whether or not arbitrary unions of infinitely-many (not necessarily countably-many) $\Sigma$-sets are again $\Sigma$-sets. By AC$_\omega$, we may rewrite any such union as a countable union as follows. Take any infinite $\mathcal A\subseteq\Sigma$ and let $A:=\bigcup\mathcal A$. We must show that $A\in\Sigma$. Now, if $A$ can be obtained as a finite union of $\Sigma$-sets, then we're done, so suppose not. It follows that $A$ must be infinite, so countably infinite since $A$ is a subset of the countable set $X$. Let $\{x_n\}$ be any enumeration of $A$, and for each $n$, let $$\mathcal A_n:=\{B\in\mathcal A:x_n\in B\}.$$ By definition of $A$ and the $x_n$, each $\mathcal A_n$ is non-empty, and $\mathcal A=\bigcup_n\mathcal A_n$. By AC$_\omega$, there is a sequence $A_n$ of elements of $\mathcal A,$ such that $A_n\in\mathcal A_n$ for all $n$, meaning in particular that each $A_n$ is a subset of $A$, and is a $\Sigma$-set containing $x_n.$ Thus, $A=\bigcup_n A_n,$ so $A$ is a countable union of $\Sigma$-sets, and so $A\in\Sigma,$ as desired. Without AC$_\omega$, we certainly can't use this argument, and we may not be able to get there at all (though I'm not sure about that). A lot of things can go very wrong if AC$_\omega$ fails--for example, a countable union of pairwise disjoint, $2$-element sets may be uncountable. This sort of thing is rather upsetting to people, so most people will take AC$_\omega$, at least, for granted, even if they don't assume stronger choice principles than that.
H: How to change this geometric progression into a direct one We have the progression: $$u_0 = 5$$ $$ u_{n+1} = 3- 0.5u_n \space ( n = 0,1,2,...)$$ I want to change this into a 'direct' (closed-form) expression, so I tried: $$ 5 \cdot (-0.5)^n + 3$$ but this actually fails for $n=0$. How can I edit this so it also holds for $n=0$? AI: If I understand the problem correctly, we do not have a geometric progression. Instead, we have the recurrence $$u_{n+1}=3-(0.5)u_n,$$ with $u_0=5$. We want an explicit formula for $u_n$. There are many techniques for dealing with this sort of problem. We use one of the basic ones. Let $u_n=c+y_n$, where we will choose $c$ to make the recurrence nicer. Then our recurrence can be rewritten as $$c+y_{n+1}=3-(0.5)(c+y_n)$$ or equivalently $$y_{n+1}=3-(0.5)c-c -(0.5)y_n.$$ To make the constant term in the recurrence disappear, we choose $c=2$. We arrive at the recurrence $$y_{n+1}=(-0.5)y_n,$$ with $y_0= 3$. This has the solution $y_n=3(-0.5)^n$, and therefore we get $$u_n=2+3(-0.5)^n$$
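A quick check (sketch) that the closed form reproduces the recurrence:

```python
# Compare u_{n+1} = 3 - 0.5*u_n, u_0 = 5, with u_n = 2 + 3*(-0.5)**n.
u = 5.0
for n in range(8):
    print(n, u, 2 + 3 * (-0.5) ** n)  # the last two columns agree
    u = 3 - 0.5 * u
```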
H: Odd part of $n-1$ and primes Using $n=11$ as an example: Step 1: $11 - 1 = 10$; the odd part of $10$ is $5$. Step 2: $11 - 5 = 6$; the odd part of $6$ is $3$. Step 3: $11 - 3 = 8$; the odd part of $8$ is $1$. Continuing this operation (with $11-1$) repeats the same steps as above. All three odd numbers $1,3,5$ from $1$ to $(n-1)/2$ appear in the cycle, so the number $11$ has a "full counter cycle". Is there any counterexample, i.e. a number that has a "full counter cycle" but isn't prime? AI: Let $n$ be a positive integer, and define the sequence $(c_i)_{i\geq0}$ inductively by $$c_0=n-1,\qquad c_{i+1}=\operatorname{odd}(n-c_i).$$ I understand your question to be: Is there a composite $n$ such that every odd number up to $\frac{n-1}{2}$ occurs in the sequence $(c_i)_{i\geq0}$? Note that $\gcd(n,c_0)=1$, and that $$\gcd(n,c_{i+1})=\gcd(n,\operatorname{odd}(n-c_i))\mid\gcd(n,n-c_i)=\gcd(n,c_i).$$ Hence $\gcd(n,c_i)=1$ for all $i\geq0$. Hence no divisor of $n$ other than $1$ occurs in the sequence $(c_i)_{i\geq0}$. So no counterexample exists with $n$ composite and odd (an odd composite $n$ has an odd prime divisor $p\le n/3\le\frac{n-1}{2}$, and $p$ never occurs). A counterexample with $n$ composite and even is easy; take $n=4$.
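The search for counterexamples is easy to automate (a sketch following the answer's formalization); it confirms that no odd composite passes, while small even composites such as $n=4$ do:

```python
# For each n, iterate c_0 = n-1, c_{i+1} = odd(n - c_i) until it cycles,
# then test whether every odd number in [1, (n-1)/2] appeared.
def odd_part(m):
    while m % 2 == 0:
        m //= 2
    return m

def has_full_cycle(n):
    seen, c = set(), n - 1
    while c not in seen:
        seen.add(c)
        c = odd_part(n - c)
    return set(range(1, (n - 1) // 2 + 1, 2)) <= seen

composites = [n for n in range(4, 200)
              if has_full_cycle(n) and any(n % d == 0 for d in range(2, n))]
print(composites)  # all even; the list starts with 4
```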
H: The sum of two periodic functions need not be a periodic function Let $f(x)=x-[x]$ and $g(x)=\tan x$. How can we see that $f(x)-g(x)$ is not a periodic function? This will show that the sum of two periodic functions need not be a periodic function. I hope the answer has enough details so that I can follow it. AI: First, notice the range of $g$ is not $\mathbb{R}$ but $\mathbb{R} \cup \{ \infty \}$. Second, $g$ and hence $f - g$ take the value $\infty$ $\color{red}{\text{at and only at}}$ $x = \pm \frac{\pi}{2}, \pm \frac{3\pi}{2}, \pm\frac{5\pi}{2}, \ldots$. This means if $f - g$ is a periodic function, then its period must have the form $n\pi$ where $n \in \mathbb{Z}_{+}$. If $n\pi$ is a period, we will have: $$f(n\pi) - g(n\pi) = f(0) - g(0)\quad\implies\quad n\pi - \lfloor n\pi\rfloor = 0 \quad\implies\quad \pi = \frac{\lfloor n\pi\rfloor}{n} \in \mathbb{Q}$$ This contradicts the known fact that $\pi$ is an irrational number.
H: for $z\in\mathbb C-\{0\},~\dfrac{1}{1+nz}\to0.$ How to show that for $z\in\mathbb C-\{0\},~\dfrac{1}{1+nz}\to0$? I've tried the triangle inequality but couldn't arrive at any conclusion. Please help me. AI: Presumably you mean as $n \to \infty$? The triangle inequality is your friend. In particular, $|1+nz| \ge n|z|-1$. If you want more: Let $\epsilon>0$ and choose $N > \frac{1}{|z|} \left(1+\frac{1}{\epsilon}\right)$. Then if $n \ge N$, we have $\left|\frac{1}{1+nz}\right| = \frac{1}{|1+nz|} \le \frac{1}{n|z|-1}< \frac{1}{1/\epsilon} = \epsilon$.
H: number theory: Let $m>n$ for $m,n\in\mathbb{Z}$, prove if $k$ divides $m$ and $k$ divides $n$ then $k$ divides $m\bmod{n}$ Let $m>n$ for $m,n\in\mathbb{Z}$, prove if $k$ divides $m$ and $k$ divides $n$ then $k$ divides $m\bmod{n}$. How should I approach this question? I only got $m=qk$ and $n=pk$ if $\frac{m}{n}=\frac{q}{p}$. AI: Hint: $\,\ ak\ {\rm mod}\,\ bk\, =\, ak - q(bk)\, =\, (a\!-\!qb) k$
H: How to compute $\mathbb{P}(\lambda X>4)$ directly? Given a random variable $X$ which is exponentially distributed, i.e. $X\sim E(\lambda)$, calculate $\mathbb{P}(X-\frac{1}{\lambda}>\frac{3}{\lambda})$. My working: $\mathbb{E}(X)=\frac{1}{\lambda}$, $Var(X)=\frac{1}{\lambda^2}$. Then $\mathbb{P}(X-\frac{1}{\lambda}>\frac{3}{\lambda})=\mathbb{P}(X>\frac{4}{\lambda})=\mathbb{P}(\lambda X>4)$. Then I am not sure how to compute this. The solution says it's equal to $e^{-4}$; furthermore, it says $\mathbb{P}(\lambda X>4)=e^{-4}$ is a direct consequence of the standardised random variable. That is, it can be computed by considering the standardised random variable. My question is how we can relate or interpret $\lambda X>4$ to standardised $X$. What is standardised $X$ in this case? Is $\lambda X\sim E(1)$? Why or why not? Thanks! AI: A standard exponential, as you conjectured, is an exponential with mean $1$, and therefore parameter $\frac{1}{1}=1$. So the variance of a standard exponential is also $1$. (Referring to a standard exponential is much less common than referring to a standard normal.) Let $X$ have exponential distribution with parameter $\lambda$, so with mean $\dfrac{1}{\lambda}$. Let $Y=\lambda X$. Then $Y$ has exponential distribution with mean $\lambda \cdot \dfrac{1}{\lambda}=1$. So $Y$ has standard exponential distribution. Thus $Y=\lambda X$ is the standardized version of $X$. For your problem about $\Pr(X\gt \frac{4}{\lambda})$, where $X$ is exponentially distributed with parameter $\lambda$, there are two possible approaches. $1.$ In general, the probability that $X\gt x$ is $e^{-\lambda x}$. Putting $x=\frac{4}{\lambda}$, we get that the probability is $e^{-4}$. $2.$ We want the probability that $X\gt \frac{4}{\lambda}$. Now $X\gt \frac{4}{\lambda}$ if and only if $\lambda X\gt 4$. But $\lambda X$ is just the standardized exponential $Y$ discussed above. And since $Y$ has parameter $1$, we have $\Pr(Y\gt 4)=e^{-4}$. The question really asked what is the probability of being more than $4$ "standard deviation units" from $0$. I am using that terminology to bring out the analogy with standard normal calculations.
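A Monte Carlo check (sketch) that the answer does not depend on $\lambda$:

```python
# P(X > 4/lambda) should be e^{-4} ~ 0.0183 for any rate lambda.
import math, random
lam = 2.5
n = 10**6
hits = sum(1 for _ in range(n) if random.expovariate(lam) > 4 / lam)
print(hits / n, math.exp(-4))  # the two numbers should be close
```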
H: $\sup_{x\in A}x \sup_{y\in B}y=\sup_{x\in A,y\in B}xy$ Let $A$ and $B$ be two sets of nonnegative numbers. Prove that $\sup_{x\in A}x \sup_{y\in B}y=\sup_{x\in A,y\in B}xy$. Thanks for your help. AI: If $x \in A$, then $x \le \sup A$. Similarly, $y \le \sup B$. Since $x,y \ge 0$, we have $xy \le \sup A \sup B$. It follows that $\sup_{x \in A, y \in B} x y \le \sup A \sup B$. In the other direction, if $x \in A, y \in B$, then $xy \le \sup_{x' \in A, y' \in B} x' y'$. Then $\sup_{y \in B} x y = x \sup B \le \sup_{x' \in A, y' \in B} x' y'$. Since this is true for all $x \in A$, and both $x\ge0$ and $\sup B \ge 0$, we have $\sup_{x \in A} x \sup B =\sup A \sup B \le\sup_{x' \in A, y' \in B} x' y'$.
H: Derivative of Trace of Matrix wrt parameters I have the following function which I need to differentiate: $$L=\operatorname{trace}(\Sigma K^{-1})$$ where $K$ is a function of $\theta$ and $\Sigma$ is constant. If I'm correct, what I need to do to find $\frac{\partial L}{\partial \theta}$ is $\frac{\partial L}{\partial K}\times\frac{\partial K}{\partial \theta}$. Correct me if I am wrong. The question is: what is $\frac{\partial L}{\partial K}$? AI: This is not too hard if you write it out in components. Since $L = \operatorname{Tr}(A) = \sum_i A_{ii}$ for a matrix $A$, we get $\frac{dL}{d\theta} = \sum_i \frac{dA_{ii}}{d\theta} = \operatorname{Tr} \left(\frac{dA}{d\theta}\right)$. So it is not quite a chain rule with matrix multiplication the way you wrote it out. Inside the trace, you can also use $\frac{d}{d\theta} K^{-1} = -K^{-1} \frac{dK}{d\theta} K^{-1}$, which you can derive from $K K^{-1} = I$. Putting the two together, $$\frac{\partial L}{\partial \theta} = -\operatorname{Tr}\left(\Sigma K^{-1}\frac{dK}{d\theta}K^{-1}\right).$$
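A finite-difference check of the combined formula (a sketch; the quadratic $K(\theta)$ below is a made-up example just for testing):

```python
# Verify d/dtheta trace(Sigma K^{-1}) = -trace(Sigma K^{-1} K' K^{-1}).
import numpy as np

rng = np.random.default_rng(0)
Sigma = rng.standard_normal((3, 3))
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

K  = lambda t: 5 * np.eye(3) + t * A + t**2 * B   # some smooth, invertible K(theta)
dK = lambda t: A + 2 * t * B
L  = lambda t: np.trace(Sigma @ np.linalg.inv(K(t)))

t, h = 0.7, 1e-6
Kinv = np.linalg.inv(K(t))
print(-np.trace(Sigma @ Kinv @ dK(t) @ Kinv))   # analytic derivative
print((L(t + h) - L(t - h)) / (2 * h))          # numeric derivative; should match
```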
H: Changing Summation Index Question I'm sorry if this seems like a very novice question, but I am still relatively new to the world of discrete math ( still in 9th grade). I've been reviewing some of the concepts I learned in a chapter from Concrete Mathematics (Graham,Knuth,Patashnik) about Sums, and I seem to have completely missed something that threw me off. I remember going over this problem, so I decided to re-solve it just to make sure I was 100% percent sure I knew those concepts. But, I've been trying to get it for a while and I still cannot find out the answer to my problem. The problem starts as follows: \begin{equation} S = \sum_{0 \le k \le n} (a + bk) \end{equation} Using the commutative law, the index $k$ can be re-written as $n-k$ \begin{equation} S = \sum_{0 \le (n-k) \le n} (a + b(n-k)) \end{equation} And this can then equal \begin{equation} S = \sum_{0 \le k \le n} (a + bn-bk) \end{equation} My question is not as to how we got $a + bn - bk$, but as to why the index can change from $n-k$ to $k$ from the previous equation? Why and how can this be done? AI: The author's contention seems to be that the two sums listed are the same, as looping over the values in the set $0 \leq n-k \leq n$ is the same as looping over $k$ such that $0 \leq k \leq n$. To see why this is true, we just need to see that the values they loop over are the same. Then, by the commutativity of addition, the order of the values appearing in the sum doesn't matter, so the two sums can be taken to be equal. Then it is a matter of seeing that if $0 \leq k \leq n$ defines the same set as $0 \leq n-k \leq n$, which we can see as follows: $k \in \mathbb{N}$ such that $0 \leq k \leq n$ corresponds to the set of values $\{0, 1, 2, \ldots, n-1, n\}$. $n-k \in \mathbb{N}$, then, such that $0 \leq (n-k) \leq n$, corresponds to the set of values $\{n, n-1, n-2, \ldots, 1, 0\}$. Clearly, these two sets are the same.
H: Orthogonality and linear independence [Theorem] Let $V$ be an inner product space, and let $S$ be an orthogonal subset of $V$ consisting of nonzero vectors. Then $S$ is linearly independent. Also, orthogonal set and linearly independent set both generate the same subspace. (Is that right?) Then orthogonal $\rightarrow$ linearly independent but orthogonal $\nleftarrow$ linearly independent is that right? One more question. For T/F, Every orthogonal set is linearly independent (F) Every orthonormal set is linearly independent (T) Why? AI: For the theorem: Hint: let $v_{1}, v_{2}, \ldots, v_{k}$ be the vectors in $S$, and suppose there are $c_{1}, \ldots, c_{k}$ such that $v_{1}c_{1} + \cdots + v_{k}c_{k} = 0$. Then take the inner product of both sides with any vector in the set $v_{j}, 1 \leq j \leq k$. Conclude something about the coefficient $c_{j}$ using the fact that $v_{j} \neq 0$ for all vectors $v_{j}$ in the set. For your next question, orthogonal set implies linearly independent set with the condition that all the vectors in the set are nonzero - we need this in the above proof! (I'll address that in your true false questions). You're right that linearly independent need not imply orthogonal. To see this, see if you can come up with two vectors which are linearly independent over $\mathbb{R}^{2}$ but have nonzero dot product. (It shouldn't be too hard to do so!) For your true false question, every orthogonal set need not be linearly independent, as orthogonal sets can certainly include the '$0$' vector, and any set which contains the '$0$' vector is necessarily linearly dependent. However, every orthonormal set is linearly independent by the above theorem, as every orthonormal set is an orthogonal set consisting of nonzero vectors.
H: Contraction Mapping $$f(x)=\begin{pmatrix}1/4 & 0 & 1/2 \\ 0 & 1/3 & 0\\ -1/2 & 0 & 1/4 \end{pmatrix}\begin{pmatrix}x_1 \\ x_2\\ x_3 \end{pmatrix}, \qquad\forall x=\begin{pmatrix}x_1 \\ x_2\\ x_3 \end{pmatrix} \in \Bbb R^3 $$ I am able to prove that this function is a contraction with the standard Euclidean metric by using polar coordinates. I have heard that it is also possible to show this by finding the operator norm. How would one do this? AI: Write $f(x)=Ax$. Then $$\|f(x)-f(y)\| = \|A(x-y)\| \leq \|A\| \|x-y\|,$$ where $\|A\|$ is the operator norm $$ \|A\| = \sup_{x\in \mathbb{R}^3,\,x\neq 0}\frac{\|Ax\|}{\|x\|}. $$ In the case of the Euclidean norm, with $A: \mathbb{R}^3\to \mathbb{R}^3$ a linear transformation, $$ \|A\| = \sqrt{\lambda_{\mathrm{max}}(A^T A)}, $$ where $\lambda_{\mathrm{max}}(A^T A)$ is the largest eigenvalue of $A^TA$, and you want to check that this is less than $1$. In your case: $$ A^T A = \begin{pmatrix}5/16 & 0 & 0 \\ 0 & 1/9 & 0\\ 0 & 0 & 5/16 \end{pmatrix}, $$ which gives $\lambda_{\mathrm{max}} = 5/16$ and $\|A\| = \sqrt{5}/4 < 1$. Thus $$ \|f(x)-f(y)\| = \|A(x-y)\| \leq \frac{\sqrt{5}}{4}\|x-y\| < \|x-y\| \quad (x\neq y), $$ and $f$ is a contraction.
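The computation is easy to confirm numerically (a sketch):

```python
# Spectral norm of A: largest singular value = sqrt(lambda_max(A^T A)).
import numpy as np
A = np.array([[0.25, 0.0, 0.5],
              [0.0, 1/3, 0.0],
              [-0.5, 0.0, 0.25]])
print(A.T @ A)               # diag(5/16, 1/9, 5/16)
print(np.linalg.norm(A, 2))  # sqrt(5)/4 ~ 0.559 < 1, so f is a contraction
```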
H: How to prove " $¬\forall x P(x)$ I have a step but can't figure out the rest. I have been trying to understand for hours and the slides don't help. I know that since I have "not P" that there is a case where not All(x) has P... but how do I show this logically? 1. $\forall x (P(x) → Q(x))$ Given 2. $¬Q(x)$ Given 3. $¬P(x)$ Modus Tollens using (1) and (2) 4. 5. 6. AI: First, you want to instantiate your quantified statement with a witness, say $x$: So from $(1)$ we get $$\;P(x) \rightarrow Q(x) \tag{$1\dagger$}$$ Then from $(1\dagger)$ with $(2)$ $\lnot Q(x)$, by modus tollens, you can correctly infer $(3)$: $\lnot P(x)$. So, from $(3)$ you can affirm the existence of an $x$ such that $\lnot P(x)$ holds: $\quad\exists x \lnot P(x)$ Then recall that, by DeMorgan's for quantifiers,$$\underbrace{\exists x \lnot P(x) \quad \equiv \quad \lnot \forall x P(x)}_{\text{these statements are equivalent}}$$
H: Contraction Map on Compact Normed Space has a Fixed Point Let $K$ be a compact normed space and $f:K\rightarrow K$ such that $$\|f(x)-f(y)\|<\|x-y\|\quad\quad\forall\,\, x, y\in K, x\neq y.$$ Prove that $f$ has a fixed point. AI: Edit: Note that $f$ is continuous. (Why?) Define $$g(x)=\lVert f(x)-x\rVert$$ for all $x\in K$. Use the compactness of $K$ to show that $g$ obtains a non-negative minimum. If $f$ has no fixed point, we can then derive a contradiction. (Apologies for my earlier, bogus approach.) You can further show that $f$ has a unique fixed point, if you like.
H: Is This A Derivative? I am in a little over my head. This all began with my reading how each level of Pascal's triangle adds to $2^n$, where $n$ = row number, starting with $n=0$. I then thought, "wouldn't it be clever if the rows added to something else--like say $3^n$ instead?" Or even better, generalize it for any constant, $a^n$. All that was needed was to multiply any given number in Pascal's triangle by $a^n/2^n$ $$ \begin{array}{rcccccccccc} & & & & & & 1\\\ & & & & & \frac{a}{2} & & \frac{a}{2}\\\ & & & & \frac{a^2}{4} & & \frac{a^2}{2} & & \frac{a^2}{4}\\\ & & & \frac{a^3}{8} & & \frac{3*a^3}{8} & & \frac{3*a^3}{8} & & \frac{a^3}{8}\\\ & & \frac{a^4}{16} & & \frac{a^4}{4} & & \frac{3*a^4}{8} & & \frac{a^4}{4} & & \frac{a^4}{16}\\\ & \frac{a^5}{32} & & \frac{5*a^5}{32} & & \frac{5*a^5}{16} & & \frac{5*a^5}{16}& & \frac{5*a^5}{32} & & \frac{a^5}{32}\\\ & \frac{a^6}{64}& &\frac{3*a^6}{32}& &\frac{15*a^6}{64}& &\frac{5*a^6}{16}& &\frac{15*a^6}{64}& &\frac{3*a^6}{32}& &\frac{a^6}{64}\\\ & & ... & & & &... & & & & ... & \end{array} $$ Adding any row should give $a^n$. Placing said coefficients in front of a binomial expansion and solving for the binomial expression yields $$(a^n/2^n)*(x+y)^n$$ Letting $a=2$ makes Pascal's triangle, but every other value of $a$ distorts every value and relation (except the $n=0$ row, for obvious reasons). This triangle creates some interesting relations that are shared with Pascal's triangle and are immediately obvious: every term in the middle column may be divided by $a$ to yield the term above and to the right or left of it--just like Pascal's triangle ($a=2$). Next is a really fascinating fluke: $$ \frac{\partial \frac {a^4}{4}}{\partial a}=a^3 $$, which is the sum of the line above it. $$ \frac{\partial \frac {a^2}{4}}{\partial a}=\frac {a}{2} $$, which is found in the line above it. I realize that $a$ must have a definite value as a coefficient in order to have meaning, and it is not itself a function, but it seems curious that a derivative relationship would show up in the relations between the coefficients of this modified triangle. This serendipitous relation fascinates me to the point of asking "what's up?" here. Is this a derivative? Have derivative relationships popped up organically elsewhere in function theory? AI: Pascal's triangle is fascinating and encodes many profound mathematical relationships. However, I think all you've observed is that if $f(a)=a^n$ then $f'(a)=na^{n-1}$. This explains why if you differentiate a term $\frac{a^n}{2^n}{n\choose k}$ in one row with respect to $a$ you'll get something resembling something in the row right above. In particular you get $a^{n-1} \frac{n!\cdot n}{2^n(n-k)!k!}$. For small $n$ it might seem that there are patterns, particularly in the rows that are powers of 2, since the $2^n$ might go away. In general, however, this is not a meaningful manipulation of Pascal's triangle, especially since you've stolen away one of its most beautiful features: the fact that ${{n-1}\choose k-1}+{{n-1}\choose k}={n \choose k}$, equivalently that each entry is the sum of the two entries directly above. As to your derivative question, I'm not sure what you're expecting... I think what you've observed unfortunately happens to be purely incidental. But it's great that you're noticing patterns - that's how most great ideas in math come about! Derivatives have wide-ranging and incredibly useful applications. If by "derivative relationships" you mean differential equations, then heck yeah, they come up all the time in analysis and applied mathematics.
H: Finding the probability from a markov chain with transition matrix Consider the Markov Chain with state space $S=\{v,w,x,y,z\}$, transition matrix below: $$\left[\begin{array}{cccccccccc} 0 & 0.4 & 0.6 & 0 & 0\\ 0 & 0.5 & 0.5 & 0 & 0\\ 0 & 0 & 0 & 0.1 & 0.9\\ 0 & 0 & 0& 0.2 & 0.8\\ 0.7 & 0 & 0.3 & 0 & 0 \end{array}\right]$$ (where the matrix rows/columns correspond to the states in alphabetical order), and $X_0=v$. 1) What is $P(X_1=x, X_2=z, X_3=v)$? For this question, do I multiply the transition probabilities starting from state x -> state z -> state v? That gives: $$0.9 \times 0.7 = 0.63$$ Am I on the right track? 2) Find the probability distribution of $X_4$. Any hints for this question please? 3) Find $P(\{X_2=w\} \cup \{X_3=x\})$ and $P(X_2=x|X_4=z)$. I'm completely lost here. I'm assuming that the first one is finding the union of $X_2$ and $X_3$, and the second question is finding the probability of $X_2$ given $X_4$; I am not sure what these "X" values are. Could anyone please help me or give me a hint on these questions? AI: First, let's agree on notation: by $X_n$ we mean the state at "time" $n$, and we are told that the initial state ($n=0$) is $X_0 = v$. The transition matrix has entries $M_{i,j} = P(X_{n+1}=j \mid X_{n}=i)$ (the row corresponds to the 'before' state, the column to the 'after' state). For the first question, you want to compute a particular transition path. But remember that you start from $X_0$ (it can help to draw a graph of the transitions), so you actually are computing: $$P(X_1=x,X_2=z,X_3=v | X_0=v)=\\ =P(X_1=x | X_0=v) P(X_2=z|X_1=x) P(X_3=v|X_2=z) =\\ = 0.6 \times 0.9 \times 0.7$$ (The first equation is true because it's a Markov chain.) For the second, you need to compute the probability of arriving at each one of the five states at time $n=4$. You could do that by summing over all the paths that start from $X_0=v$, but that would be painful. (In general; perhaps not so much in this case because there are few transitions with positive probability.) A more elegant way is to recall that the 4-step transition probabilities are given by $M^4$. Once you compute that, you just take the first row.
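The same computations in NumPy (a sketch), with states indexed $v,w,x,y,z = 0,\dots,4$:

```python
import numpy as np
M = np.array([[0.0, 0.4, 0.6, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.1, 0.9],
              [0.0, 0.0, 0.0, 0.2, 0.8],
              [0.7, 0.0, 0.3, 0.0, 0.0]])
v, w, x, y, z = range(5)

# (1) P(X_1=x, X_2=z, X_3=v | X_0=v)
print(M[v, x] * M[x, z] * M[z, v])       # 0.6 * 0.9 * 0.7 = 0.378

# (2) distribution of X_4 = first row of M^4
print(np.linalg.matrix_power(M, 4)[v])
```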
H: Is $f$ a differentiable function? Hello everyone, I have this problem; can somebody help me with it? $f:\mathbb{R}^2\rightarrow{}\mathbb{R}$ is defined by: $$f(x,y) = \left \{ \begin{matrix} \ln\left(\displaystyle\frac{x}{y}\right) & \mbox{if } xy\geq{0} \\ 0 & \mbox{if }xy<0\end{matrix}\right. $$ Is $f$ a differentiable function? Thanks for your help :D, have a nice day AI: The function behaves well (according to your definition) away from the axes: at any point with $xy\neq 0$, $f$ is clearly differentiable. However, at points on the axes your function definition doesn't make sense: there $xy=0$, so the first branch applies, but $x/y$ is then $0$ or undefined, so $\ln(x/y)$ does not exist. Since the values do not exist there, we can't talk about the function being differentiable.
H: Characteristic polynomial of a matrix $A$ is $x^7$ and $\operatorname{rank}(A)=4$ and $\operatorname{rank}(A^2)=1$. Classify $A$. I know how to do this problem the 'long' way. I was wondering if there was an easier, less computationally cumbersome way to do this. Here is the question: Let $A$ be a square matrix over $\mathbb R$. Suppose that the characteristic polynomial of $A$ is $x^7$ and $\operatorname{rank}(A)=4$ and $\operatorname{rank}(A^2)=1$. Classify all such matrices up to similarity. My strategy is to list all the possible minimal polynomials and then for each case list the possible invariant factors (using Cayley-Hamilton) and then look at the RCF in each case and see if it satisfies the two other conditions. But this seems not a very quick way to do this. Is there some other way (for example, one that circumvents listing the minimal polynomials) to do this? Also, for typical problems like this is there a standard way to proceed? Is it easier to use, for example, Jordan forms rather than rational canonical forms? As always, all your help is greatly appreciated. AI: I would do this in my head as follows. Having a power of $x$ as characteristic polynomial means $A$ is nilpotent, and we need to find its Jordan type (the multiset of sizes of the Jordan blocks). Think of each Jordan block as a row of boxes, arrange them vertically in decreasing order of size, with the leftmost boxes aligned; this gives the Young diagram of the Jordan type. The number of boxes is the rank of $A^0$, which is $7$, the degree of the characteristic polynomial. Each application of $A$ chops off the leftmost column of the diagram. Saying $\def\rk{\operatorname{rank}}\rk A=4$ means $4$ boxes remain after doing this once, so the first column had $7-4=3$ boxes. Saying $\rk A^2=1$ means one box remains after doing it twice, so the second column had $4-1=3$ boxes. The remaining box must be in the third column (and the first row). This makes the Jordan type $(3,2,2)$, and $A$ is similar to the Jordan normal form with blocks of those sizes.
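The box-chopping bookkeeping fits in a few lines (a sketch): column $i$ of the diagram has $\operatorname{rank}(A^{i-1})-\operatorname{rank}(A^i)$ boxes, and conjugating the column lengths gives the block sizes:

```python
# Jordan type of a nilpotent matrix from the ranks of its powers.
def jordan_type(ranks):           # ranks = [rank(A^0), rank(A^1), ..., 0]
    cols = [ranks[i] - ranks[i + 1] for i in range(len(ranks) - 1)]
    # conjugate partition: row r exists in every column of length >= r
    return [sum(1 for c in cols if c >= r) for r in range(1, cols[0] + 1)]

print(jordan_type([7, 4, 1, 0]))  # [3, 2, 2]
```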
H: Area bounded by a parametric curve Let $\gamma: \mathbb{R}\rightarrow \mathbb{R}^3$ be defined by: $$\gamma(\theta):= (\cos(\theta), \sin(2\theta), \cos(3\theta))$$ and let $$S:=\lbrace t\gamma (\theta): t\in [0,1], \hspace{0.1cm} \theta\in\mathbb{R}\rbrace.$$ Calculate the area of $S$. I have problems understanding this exercise... When I have a parametric curve defined by $x=f(t)$ and $y=g(t)$, $\alpha \le t\le \beta$, I use the formula: $$\int_{\alpha}^{\beta} f'(t)g(t)dt$$ (I understand the formula and its derivation). But I don't know if there exists a generalization to calculate the area of any parametric curve, or if this exercise needs a different kind of approach. AI: Hint: the area element for a parametric surface ${\bf R} = {\bf R}(u,v)$ is $\left| \dfrac{\partial {\bf R}}{\partial u} \times \dfrac{\partial \bf R}{\partial v}\right|\; du\; dv$
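Here, with ${\bf R}(t,\theta)=t\,\gamma(\theta)$, one gets ${\bf R}_t=\gamma$ and ${\bf R}_\theta=t\,\gamma'$, so the area element is $t\,|\gamma\times\gamma'|\,dt\,d\theta$ and the area over one period is $\frac12\int_0^{2\pi}|\gamma(\theta)\times\gamma'(\theta)|\,d\theta$. A numerical evaluation (a sketch; it computes the area of the parametrized surface, counting any self-overlap with multiplicity):

```python
import numpy as np

gamma  = lambda th: np.array([np.cos(th), np.sin(2*th), np.cos(3*th)])
dgamma = lambda th: np.array([-np.sin(th), 2*np.cos(2*th), -3*np.sin(3*th)])

th = np.linspace(0, 2*np.pi, 20001)
integrand = np.linalg.norm(np.cross(gamma(th).T, dgamma(th).T), axis=1)
print(0.5 * np.trapz(integrand, th))  # numeric value of the area
```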
H: Condition for the outer product of vectors to be nonsingular I am generating a matrix from a vector through an outer product in my code, and the resulting matrix is singular. Is there a condition I can use to check the vectors and figure out where the singularity comes from? AI: The outer product of two nonzero vectors always has rank $1$ (and more generally, the rank of a product of matrices is at most the rank of any one of the matrices). So the only way you'll get a nonsingular matrix is if the matrix is $1 \times 1$.
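A two-line illustration (sketch):

```python
import numpy as np
Mx = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
print(np.linalg.matrix_rank(Mx))  # 1, so this 3x3 matrix is singular
```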