H: Definition of ordinal exponentiation I found that the usual ordinal exponentiation $\alpha^{\beta}$ is the set of functions from $\beta$ to $\alpha$ with finite support, ordered by antilexicographic order. (least significant position first) It cannot be defined on the set of all functions, since $f,g:\omega \to 2$ given by $f=(0,1,0,1,\cdots)$ and $g=(1,0,1,0,\cdots)$ are not comparable. Then how about the lexicographic order? Isn't it possible to define a well-order on the set of all functions from $\beta$ to $\alpha$? AI: The lexicographic order defines a linear order on ${}^\beta\alpha$, but in general, it is not a well-ordering. For example, in the case ${}^\omega2$, the set $$ \{ (1,0,\ldots), (0,1,0, \ldots), (0,0,1,0,\ldots), \ldots \} $$ does not have a lexicographic least element.
H: Smooth homotopic maps and closed forms. Does anyone have any idea for showing the following: Let $f_0, f_1:M\rightarrow N$ be smooth homotopic maps between the manifolds $M$ and $N$. Suppose $M$ is compact with no boundary. Show that for every closed form $\omega\in \Omega^m(N)$ (where $m=\dim M$), $$\int_{M}f_0^*\omega=\int_Mf_1^*\omega.$$ AI: Let $F:M\times I\rightarrow N$ be a smooth homotopy between $f_0$ and $f_1$. Then, since $d$ commutes with pullbacks and by Stokes's theorem, we have \begin{align*} 0 &= \int_{M\times I} F^\ast d\omega \\ &= \int_{M\times I} dF^\ast \omega\\ &= \int_{\partial(M\times I)} F^\ast \omega \\ &= \int_{M\times\{1\}} F^\ast \omega - \int_{M\times\{0\}}F^\ast \omega\\ &= \int_M f_1^\ast\omega - \int_Mf_0^\ast \omega. \end{align*}
H: $\sqrt 2$ is even? Is it mathematically acceptable to use "if $n^2$ is even, then $n$ is even" to conclude that since $2$ is even, $\sqrt 2$ is even? Furthermore, can one use that result to also conclude that $\sqrt [n]{2}$ is even for all $n$? A similar argument for odd numbers should give that $\sqrt[n]{k}$ is even or odd according as $k$ is even or odd. My question is: has any of the above been considered under a more formal subject, or is it a correct/nonsensical observation? AI: Yes, essentially analogous arguments work to extend the notion of parity from the ring of integers to many rings of algebraic integers such as $\,\Bbb Z[\sqrt[n]{k}].\ $ The key ideas are: one can apply parity arguments in any ring that has $\ \mathbb Z/2\ $ as an image, e.g. the ring of all rationals with odd denominator, or the Gaussian integers $\rm\:\mathbb Z[{\it i}\,],\:$ where the image $\rm\ \mathbb Z[{\it i}\,]/(2,{\it i}-\!1) \cong \mathbb Z/2\ $ yields the natural parity definition that $\rm\ a\!+\!b\,{\it i}\ $ is even iff $\rm\ a\equiv b\ \ (mod\ 2),\ $ i.e. $ $ if $\rm\ a+b\,{\it i}\ $ maps to $\:0\:$ via the above isomorphism, which maps $\rm\ 2\to 0,\ i\to 1\:$. Generally, it is easy to show that if $\rm\:2\nmid f(x)\in \Bbb Z[x]\setminus \Bbb Z\:$ then the number of ways to define parity in the ring $\rm\ \mathbb Z[w] \cong \mathbb Z[x]/(f(x))\ $ is given by the number of roots of $\rm\: f(x)\: $ modulo $2\:.\ $ For suppose there exists a homomorphism $\rm\ h\, :\, \mathbb Z[w]\to \mathbb Z/2.\:$ Then $\rm\:w\:$ must map to a root of $\rm\:f(x)\:$ in $\rm\ \mathbb Z/2.\ $ Thus if $\rm\ f(0)\equiv 0\ (mod\ 2)\ $ then $\rm\: \mathbb Z[w]/(2,w) \cong \mathbb Z[x]/(2,x,f(x)) \cong \mathbb Z/2\ $ by $\rm\: x\mid f(x)\ (mod\ 2),\, $ and $\rm\, \!f(1)\equiv 0\ (mod\ 2) $ $\Rightarrow$ $\rm \mathbb Z[w]/(2,w\!-\!1) \cong \mathbb Z[x]/(2,x\!-\!1,f(x)) \cong \mathbb Z/2\, $ by $\rm\, x\!-\!1\,|\, f(x)\ (mod\ 2). $ Let's consider some simple examples. Since $\rm\ x^2\!+1\ $ has the unique root $\rm\ x\equiv 1\ (mod\ 2),\:$ the Gaussian integers $\rm\ \mathbb Z[{\it i}\,]\cong \mathbb Z[x]/(x^2\!+1)\ $ have a unique definition of parity, with $\:i\:$ being odd. Since $\rm\ x^2\!+x+1\ $ has no roots modulo $\: 2,\: $ there is no way to define parity for the Eisenstein integers $\rm\ \mathbb Z[w] \cong \mathbb Z[x]/(x^2\!+x+1).\, $ Indeed since $\rm\ w^3 = 1\ $ we infer that $\rm\: w \equiv 1\ (mod\ 2)\ $ contra $\rm\ w^2\!+w+1 = 0.\ $ On the other hand $\rm\ \mathbb Z[w] \cong \mathbb Z[x]/(x^2\!+x+2)\ $ has two parity structures since both $\:0\:$ and $\rm\:1\:$ are roots of $\rm\ x^2\! + x + 2\ $ modulo $\rm\:2,\:$ so we can define $\rm\:w\:$ to be either even or odd.
H: Optimum fitting for flanges in a rectangular plate I have a $2500~\text{mm}\times6300~\text{mm}\times25~\text{mm}$ (width $\times$ length $\times$ thickness) steel plate from which I want to cut flanges of diameter $235~\text{mm}$. Can anyone please suggest $1)$ how many flanges would fit in this plate? $2)$ a method of cutting circular flanges so that wastage is minimum? $3)$ a generalized algorithm that would help me calculate this for any plate size? (P.S.: I have heard about the packing problem but I am unable to understand it.) AI: When you can get a lot of flanges out of the rectangle, the optimal packing is hexagonal. If you put one row along the long edge of your plate, you will get $\lfloor \frac {6300}{235}\rfloor = 26$ along the bottom row. You get $\lfloor \frac {2(2500-235)}{235\sqrt 3} +1\rfloor=12$ rows. Each row has $26$, so you get $312$ of them. If you put the straight edge along the short direction, you get rows of $10$ and $29$ rows, for a total of $290$. The first is better. When the flanges are larger compared to the sheet, it gets more complicated. With other plate dimensions, the intervening rows might be one flange short.
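A minimal Python sketch of the row-counting arithmetic above; totals near the plate boundary are sensitive to rounding, so treat the second orientation's count as approximate:

```python
from math import floor, sqrt

def hex_pack(width, length, d):
    """Hexagonal packing: straight rows along the 'length' edge,
    successive rows offset, with row pitch d*sqrt(3)/2."""
    per_row = floor(length / d)
    rows = floor(2 * (width - d) / (d * sqrt(3)) + 1)
    return per_row, rows, per_row * rows

print(hex_pack(2500, 6300, 235))   # (26, 12, 312): rows along the long edge
print(hex_pack(6300, 2500, 235))   # rows along the short edge: roughly 290-300
```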
H: Identifying traitors Logic is one of the facets of math that is more 'fun', but this one is beyond me. Consider using logic statements and/or truth tables. 'Or' here is inclusive, that is 'A or B' means 'A or B or both, but not neither'. King Warren suspects the Earls of Akaroa, Bairnsdale, Claremont, Darlinghurst, Erina and Frankston of plotting a conspiracy against him. He questions them in private and they each tell him: Akaroa: If Darlinghurst and Erina are both loyal, then Frankston and either Bairnsdale or Claremont are traitors. Bairnsdale: If either Claremont or Erina are traitors, then either Darlinghurst is a traitor and Claremont isn’t, or Erina is and Akaroa isn’t. Claremont: Anyone here could be a traitor except for me. Darlinghurst: If Frankston is a traitor and Bairnsdale’s involvement in the conspiracy implicates Erina’s involvement, then Claremont is a traitor. Erina: If Akaroa is a traitor and the fact Bairnsdale is not a traitor means that Claremont is, then Akaroa is not a traitor. Frankston: If Claremont is a traitor so is Akaroa. Each traitor will give false information but each loyal Earl will give true statements. At least how many traitors are there, and who are they? AI: Introduce boolean variables $A,B,C,D,E,F$, where $A$, for instance, is true if Akaroa is not a traitor and false otherwise, and analogously for the rest of the variables. Then we have the following equations (the one for $C$ is trivially true): $$\begin{align} A&=(D\wedge E)\to (\neg F \wedge (\neg B \vee \neg C))\\ B&=(\neg C\vee \neg E)\to (\neg D\wedge C\vee E\wedge\neg A)\\ C&=C\\ D&=(\neg F\wedge(\neg B\to\neg E))\to\neg C\\ E&=(\neg A\wedge(B\to\neg C))\to A\\ F&=\neg C\to\neg A. \end{align}$$ We could simply try all $2^6=64$ combinations of truth values to $A,\ldots,F$ and see which solutions there are. It can be done by hand or easily by computer. There might, by chance, be a short elegant algebraic solution, but I don't want to look for it right now. This problem is an instance of 3SAT, and I don't want to spend superpolynomial time solving this problem (assuming P$\neq$NP). Added: I input this into Mathematica, and if I didn't make any mistakes, there are 26 solutions. One of them is that everyone is a traitor, and the largest number of loyal earls is 5, which happens twice. The solutions are, sorted by the number of traitors: $$\begin{matrix} A&B&C&D&E&F\\ 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 1\\ 0& 0& 0& 1& 0& 0\\ 0& 0& 1& 0& 0& 0\\ 1& 0& 0& 0& 0& 0\\ 0& 0& 0& 1& 0& 1\\ 0& 0& 1& 0& 0& 1\\ 0& 1& 1& 0& 0& 0\\ 1& 0& 0& 0& 1& 0\\ 1& 0& 0& 1& 0& 0\\ 1& 0& 1& 0& 0& 0\\ 0& 0& 1& 1& 0& 1\\ 0& 1& 1& 0& 0& 1\\ 0& 1& 1& 0& 1& 0\\ 1& 0& 0& 1& 1& 0\\ 1& 0& 1& 0& 0& 1\\ 1& 0& 1& 0& 1& 0\\ 1& 1& 1& 0& 0& 0\\ 0& 1& 1& 0& 1& 1\\ 1& 0& 1& 0& 1& 1\\ 1& 0& 1& 1& 0& 1\\ 1& 0& 1& 1& 1& 0\\ 1& 1& 1& 0& 0& 1\\ 1& 1& 1& 0& 1& 0\\ 0& 1& 1& 1& 1& 1\\ 1& 1& 1& 0& 1& 1 \end{matrix}$$ If one would happen to know the answer beforehand, or guess it, then it is easy to verify by hand: just check that $A=0,B=C=D=E=F=1$ is a solution and that $A=B=C=D=E=F=1$ is not a solution. Thus the answer to the puzzle is: the least number of traitors is one, and then it is either Akaroa or Darlinghurst.
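A sketch of the brute-force search over all $2^6$ assignments that the answer describes; `True` means loyal, and the count it reports depends on exactly how the six statements are encoded:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

solutions = []
for A, B, C, D, E, F in product([False, True], repeat=6):
    ok = (A == implies(D and E, (not F) and ((not B) or (not C)))
          and B == implies((not C) or (not E),
                           ((not D) and C) or (E and (not A)))
          # the equation C == C is trivially true, so it imposes nothing
          and D == implies((not F) and implies(not B, not E), not C)
          and E == implies((not A) and implies(B, not C), A)
          and F == implies(not C, not A))
    if ok:
        solutions.append((A, B, C, D, E, F))

print(len(solutions))
most_loyal = max(sum(s) for s in solutions)       # True counts as 1
print([s for s in solutions if sum(s) == most_loyal])
```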
H: How to show whether $[ \frac{-p}{q}] =-[ \frac{p}{q}] $ holds or not How can one show whether $[ \frac{-p}{q}] =-[ \frac{p}{q}] $ (1) holds or not? I think it does not hold, because if $p=3$ and $q=5$, then by Euler's criterion you get that $3^{\frac{5-1}{2}}= 3^2=9 \equiv 4 \equiv -1 \pmod 5$. So $-[ \frac{p}{q}]=-(-1)=1$. Now $[ \frac{-p}{q}]= [ \frac{-1}{q}][ \frac{p}{q}]= [ \frac{-1}{5}][ \frac{3}{5}]=1 \cdot (-1) = -1 \neq 1$. So it seems that (1) does not hold. Can you prove that (1) does not hold? AI: By the "first supplement" to quadratic reciprocity, $\left[\frac{-1}{q}\right]=(-1)^{\frac{q-1}{2}}$. Hence $\left[\frac{-p}{q}\right]=\left[\frac{-1}{q}\right]\left[\frac{p}{q}\right]=-\left[\frac{p}{q}\right]$ exactly when $\left[\frac{-1}{q}\right]=-1$, which happens exactly when $q\equiv 3\pmod{4}$, assuming $p,q$ are odd primes. Also, the notation $\left(\frac{p}{q}\right)$ is preferred to $\left[\frac{p}{q}\right]$.
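A quick numerical check of the first supplement, using Euler's criterion to compute the Legendre symbol (a minimal sketch; three-argument `pow` does modular exponentiation):

```python
def legendre(a, q):
    """Legendre symbol (a/q) for an odd prime q, via Euler's criterion."""
    r = pow(a % q, (q - 1) // 2, q)
    return -1 if r == q - 1 else r

p = 3
for q in [5, 7, 11, 13, 19, 23]:          # odd primes different from p
    flip = legendre(-p, q) == -legendre(p, q)
    print(q, q % 4, flip)                  # sign flips exactly when q = 3 (mod 4)
```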
H: Find the area of a circle that is NOT covered by the rectangle Using the following image for a visual: Is there a formula or equation I can use to find the area of the circle NOT overlapped with the rectangle (i.e. the filled in orange part)? I know all of the coordinates & sizes of both the rectangle and the circle, I'm just not sure how to apply them. AI: You can break up the overlapping area into a segment and a triangle by drawing a diagonal line between the two intersection points and then the formulae for the areas of each of these shapes are very well known. Subtracting these areas from the area of the whole circle will then give you your answer.
H: Does a continuous embedding preserve gaps between subspaces? I have a separable, reflexive Banach space $(V,\|\cdot\|)$ that is continuously and densely embedded in a Hilbert space $(H,|\cdot|)$. This means there is a bounded linear injection map $j\colon V \to H$, i.e. $|j(v)|\leq C_j\|v\|$, and $j(V)$ is dense in $H$. Then, one writes $V\hookrightarrow H$, or simply $V \subset H$, having identified $V$ with $j(V)$. Now I assume that there are (closed) subspaces $V_1$, $V_2$ of $V$, such that $V = V_1 \oplus V_2$. QUESTION: Do the assumptions above imply that the minimum angle between the closures of $j(V_1)=:H_1$ and $j(V_2)=:H_2$ in $H$, that are subspaces of $H$, is bounded away from zero? I have shown that $H_1 \cap H_2 = \{0\}$, so that - by definition - the cosine of the minimum angle is given as $$ c:= c(H_1,H_2):=~ \sup ~\{ (h_1,h_2) ~: ~h_1 \in H_1,~ |h_1|=1,~ h_2 \in H_2,~ |h_2|=1 \} .$$ Thus, I want to have that $c<1$, if $V\hookrightarrow H$, $V=V_1\oplus V_2$, and $H_1 = \overline{V_1}^H$ and $H_2 = \overline{V_2}^H$, where the overline denotes the closure of $j(V_i)$ in $H$, $i=1,2$. Here are some equivalent conditions for gaps between subspaces. Let $V$ be a Banach space, and $V_1$ and $V_2$ subspaces with $V_1 \cap V_2 = \{0\}$. Then the following statements are equivalent: (i) $V_1 \oplus V_2$ is closed; (ii) $V_1^\bot \oplus V_2^\bot \subset V^*$ is closed; (iii) $\gamma(V_1,V_2):= \inf_{u\in V_1} \frac{dist(u,V_2)}{\|u\|} > 0$. Furthermore, $\gamma > 0$ implies $dist(u,V_2) \geq \gamma \|u\|$, for all $u\in V_1$. The $dist$ function is defined as $dist(u,V)=\inf_{v\in V}\|u-v\|$. These results can be found in Kato's book Perturbation Theory for Linear Operators in Chapter IV.4. With respect to the minimum angle $c$, one has: Let $H$ be a Hilbert space and $H_1 \subset H$ and $H_2 \subset H$ subspaces. Then the following statements are equivalent: (i) $c(H_1,H_2) < 1$; (ii) $H_1 \cap H_2 = \{0\}$ and $H_1 \oplus H_2$ is closed; (iii) there is a constant $\rho > 0$, such that $|h_1 + h_2| \geq \rho |h_2|$, for all $h_1 \in H_1$ and $h_2 \in H_2$; (iv) $\inf \{ dist(h_1, H_2) : h_1 \in H_1, |h_1|=1 \} > 0$. These results are given in Galantai's book Projectors and Projection Methods on page 249. Any help, hint, and counterexample is highly appreciated. AI: These assumptions are not sufficient. Suppose that $V_1$ is a Banach space continuously and densely embedded into $H$ by the map $j_1 : V_1 \to H$, with $H_1 := j_1(V_1) \subsetneq H$. (For instance, you could let $H = L^2([0,1])$, and $V_1 = C([0,1])$, or perhaps $V_1 = H^1([0,1])$ if you'd like a Hilbert space). Fix some $y \in H \setminus j_1(V_1)$ (e.g. your favorite discontinuous function in $L^2([0,1])$). Set $V_2 = \mathbb{R}$ and let $V = V_1 \oplus V_2$ be their direct sum under the norm $\|(x,\alpha)\|_{V}^2 = \|x\|_{V_1}^2 + |\alpha|^2$ (or anything equivalent). Then embed $V$ into $H$ by the map $j(x,\alpha) = j_1(x) + \alpha y$. $j$ is continuous, and we have $H_1 \subset j(V)$ so that $j(V)$ is dense. But $H_1$ is already dense in $H$, so its closure contains $H_2 := j(V_2) = \mathbb{R} y$. Thus the minimum angle between $H_1$ and $H_2$ is zero.
H: $K[x_1, x_2,\dots ]$ is a UFD I wonder about how to conclude that $R=K[x_1, x_2,\dots ]$ is a UFD for $K$ a field. If $f\in R$ then $f$ is a polynomial in only finitely many variables, how do I prove that any factorization of $f$ in $R$ only have factors in these indeterminates, i.e. takes place in the UFD $K[x_1, x_2,\dots , x_n]$ for some $n$? Somebody argued that $f$ can not have one unique factorization in $K[x_1, x_2,\dots, x_n]$ and another in $K[x_1, x_2,\dots, x_n, \dots, x_m]$ I don't understand this. Do all primes in $K[x_1, x_2,\dots, x_n]$ necessarily stay prime in $R=K[x_1, x_2,\dots ]$? How do we know that the irreducible/prime elements in $K[x_1, x_2,\dots, x_n]$ stays irreducible in $R=K[x_1, x_2,\dots ]$? AI: Note that $K[x_1,x_2,\dots]=\bigcup_{n\in \Bbb N}K[x_1,x_2,\dots,x_n]$. I'm going to denote these rings in the union by $R_n$ to save typing time. To see that an irreducible $p$ of $R_n$ is irreducible in $R_m$ for $m\geq n$, suppose you have an equation $p=ab$ where $a,b\in R_m$. By evaluating $x_1,\dots, x_n$ all at 1, the variables in $p$ disappear, and you get an equation $\lambda=\overline{ab}\in K[x_{n+1}\dots, x_m]$ where $\lambda\in K$. This is a contradiction unless $a$ and $b$ already fell in $R_n$ in the first place, so that they had no variables above $x_n$. But you already know that in $R_n$, one of $a$ or $b$ must be a unit, so $p$ is irreducible in $R_m$ as well. So between the rings, primes stay prime and irreducibles stay irreducible. This allows you to conclude that an element has a prime factorization in the first place. Then you can argue that any two factorizations of a single element into primes must consist of elements in a common $K[x_1,\dots, x_m]$, which will force the factorizations to be equivalent.
H: Solving the next integral (by parts?) Can anyone give me a hint / help with the next integral? Thanks! $$\displaystyle\int_{0}^{t}{x^{a-1}(t-x)^{b-1}dx}$$ AI: This integral equals $t^{a+b-1}\frac{\Gamma(a)\Gamma(b)}{\Gamma (a+b)}$. Take $x=ty$. Then we have $\displaystyle\int_{0}^{t}{x^{a-1}(t-x)^{b-1}dx}=t^{a+b-1}\displaystyle\int_{0}^{1}{y^{a-1}(1-y)^{b-1}dy}=t^{a+b-1}\frac{\Gamma(a)\Gamma(b)}{\Gamma (a+b)}$
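A quick numerical sanity check of the closed form (a sketch using a simple midpoint rule; `math.gamma` is the standard-library gamma function):

```python
from math import gamma

def beta_integral(a, b, t, n=200000):
    """Midpoint-rule approximation of the integral of x^(a-1) (t-x)^(b-1) over [0, t]."""
    h = t / n
    return h * sum(((k + 0.5) * h) ** (a - 1) * (t - (k + 0.5) * h) ** (b - 1)
                   for k in range(n))

a, b, t = 2.5, 3.5, 2.0
print(beta_integral(a, b, t))                                  # numerical value
print(t ** (a + b - 1) * gamma(a) * gamma(b) / gamma(a + b))   # closed form agrees
```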
H: Definition by Recursion and a Question about Induction I have some questions to ask. Suppose I want to define some sequence of propositional formulas $\{\varphi_{j}\}_{j\in\mathbb{N}}$. First, I define it this way. Fix an enumeration $p_{1},p_{2},\ldots$ of propositional variables. For any $j\in\mathbb{N}$, define $\varphi_{j}$ recursively as follows. \begin{align*} \varphi_{1}&=p_{1}\\ \varphi_{j+1}&=\varphi_{j}\wedge p_{j+1}. \end{align*} Now, I don't want to specify it recursively. This time I want to say simply this way: $$\mbox{For each}\ j\in\mathbb{N},\ \mbox{define}\ \varphi_{j}=p_{1}\wedge p_{2}\wedge\cdots\wedge p_{j}.$$ The question is: is the latter way of defining $\{\varphi_{j}\}_{j\in\mathbb{N}}$ ambiguous? While one can easily get a sequence $p_{1},p_{1}\wedge p_{2},p_{1}\wedge p_{2}\wedge p_{3},\ldots$ after sticking to the former way of defining, I wonder if it is possible that one may misinterpret the latter way of defining---that it produces something different from what the former does. Which way of defining---recursively, or bluntly---is more acceptable? From the definition of $\{\varphi_{j}\}_{j\in\mathbb{N}}$ defined recursively above, I want to show that, for each $j\in\mathbb{N}$, $\varphi_{j+1}\models\varphi_{j}$ but $\varphi_{j}\not\models\varphi_{j+1}$. (Here $\models$ means logically implies and $\not\models$ means does not logically imply). By quick inspection, one can see that $\varphi_{j+1}\models\varphi_{j}$ since a conjunction must logically imply one of its constituents. And one can easily arrive at $\varphi_{j}\not\models\varphi_{j+1}$ too since $p_{j+1}$ does not appear in $\varphi_{j}$, and so there is some truth assignment satisfying $\varphi_{j}$ but not $p_{j+1}$. However, the point is: do I have to use proof by induction? I have tried, but the proof is not complicated and it does not even rely on the induction hypothesis. The induction hypothesis is never needed. Does this mean I can prove the claim directly? Or what? Thank you in advance. AI: 1) Both ways are perfectly acceptable. If you don't like to use dots (which could be considered ambiguous but are generally admitted), you can write $$\varphi_j=\bigwedge_{1\leq i\leq j} p_i.$$ 2) You don't need the proof by induction, you can just fix any $j\in\mathbb N$, and explain your reasoning at this level. For the first way it is the rule of conjunction elimination, and for the second you describe the truth assignment satisfying $\varphi_{j}$ but not $\varphi_{j+1}$.
H: Is there some nomenclature to get the integer value of a fraction? Is there some math nomenclature to represent the integer part of a number? Say, $$x \in \mathbb{R},\, \textbf{foo}(x) = \text{integer part of }x$$ Then $$x = 1.823,\, \textbf{foo}(x) = 1$$ AI: The integer part of $x$ is given by the somewhat ugly-looking $$\text{sgn}(x)\lfloor |x| \rfloor$$ which some authors abbreviate as $[x]$. $\text{sgn}(x)$ denotes the signum function. The floor function $\lfloor\cdot\rfloor$ works as expected for positive $x$, but $\lfloor -1.5\rfloor=-2$, which is not the expected $-1$ integer part of $-1.5$.
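In code the same distinction shows up as truncation versus flooring; a small Python illustration:

```python
import math

def integer_part(x):
    """sgn(x) * floor(|x|), i.e. truncation toward zero."""
    return int(math.copysign(math.floor(abs(x)), x))

print(integer_part(1.823))    # 1
print(integer_part(-1.5))     # -1 (the 'expected' integer part)
print(math.floor(-1.5))       # -2 (the floor function)
print(math.trunc(-1.5))       # -1 (built-in truncation agrees)
```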
H: Symmetric complex matrices I must show a counterexample to the sentence "every symmetric complex matrix is diagonalizable." But I'm having trouble finding one. Can someone help me? AI: As you have found a counterexample, I can give you a full answer now. Just try this: $$ A=\pmatrix{i&1\\ 1&-i}. $$ Since $A$ is nonzero but $A^2=0$, $A$ is not diagonalisable. The statement that "every complex symmetric matrix is diagonalisable" is certainly false, because every complex square matrix is similar to a complex symmetric matrix. (See Theorem 2.1.4 of Olga Ruff's Master thesis, for instance.)
H: Why are only fractions with denominator 2 and 5 non-repeating? Given a rational number $\frac{n}{d}$, I understand that in the base $10$ number system, the number can be represented as a non-repeating decimal number if and only if $d$ has only prime factors of $2$ and $5$. I have a hypothesis that the reason for this is that $2$ and $5$ are the prime factors of $10$, which is the base for our number system. So according to my hypothesis, in a base $6$ number system, a rational number could be represented as a non-repeating expansion if and only if the denominator's only prime factors are $2$ and $3$. But this is just a hypothesis; I don't know exactly why it is true. I'm not sure what the connection is between the prime factors of the base, and whether or not a rational number will be a repeating expansion. Can someone demonstrate why this holds true? AI: If $b$ is the base, then a non-repeating number $n/d$ is of the form $$ \frac{n}{d}=\frac{a}{b^k} $$ where I assume that $n$ and $d$ have no common factors. Also we can assume $k>0$ or the fraction $n/d$ would already be an integer. This means $$ nb^k=ad. $$ If $p$ is a prime factor of $d$, then $p$ divides $nb^k$, but it can't divide $n$ so it must divide $b^k$, hence $b$. Here I use the well known property of prime integers: if $p$ is prime and $p$ divides $xy$, then $p$ either divides $x$ or $y$.
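The criterion is easy to test mechanically: after reducing the fraction, strip from $d$ every prime factor it shares with the base $b$ and check whether anything is left. A minimal Python sketch:

```python
from math import gcd

def terminates_in_base(n, d, b):
    """True iff n/d has a terminating (non-repeating) expansion in base b."""
    d //= gcd(n, d)          # reduce the fraction first
    g = gcd(d, b)
    while g > 1:             # repeatedly strip prime factors shared with b
        while d % g == 0:
            d //= g
        g = gcd(d, b)
    return d == 1

print(terminates_in_base(1, 8, 10))    # True:  1/8 = 0.125
print(terminates_in_base(1, 3, 10))    # False: 1/3 = 0.333...
print(terminates_in_base(1, 12, 6))    # True in base 6, since 12 = 2^2 * 3
```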
H: What (filtered) (homotopy) (co) limits does $\pi_0:\mathbf{sSets}\to\mathbf{Sets}$ preserve? Consider the functor $\pi_0:\mathbf{sSets}\to\mathbf{Sets}$. $\pi_0$ does not preserve arbitrary limits $\pi_0$ does not send homotopy limits to limits $\pi_0$ does preserve filtered colimits $\pi_0$ does preserve arbitrary colimits Does $\pi_0$ send homotopy colimits to colimits? I think, this is true. Does $\pi_0$ preserve filtered limits? If not, what is a counterexample? Does $\pi_0$ send filtered homotopy limits to filtered limits? If not, what is a counterexample? AI: $\pi_0 : \mathbf{sSet} \to \mathbf{Set}$ is a left adjoint (exercise), so it preserves all colimits. It's also a left Quillen functor, so it even preserves homotopy colimits. There's no reason to believe anything good happens with (homotopy) limits, filtered or not, but it is true that finite (homotopy) products are preserved.
H: Unbiasedness of product/quotient of two unbiased estimators An answer to this question might just be "it depends", however I am wondering: Given unbiased estimators $\hat{\mu}_X$ and $\hat{\mu}_Y$ for the means $\mu_X$ of $X$ and $\mu_Y$ of $Y$ respectively. Under what conditions are the following true: $$E[\hat{\mu}_X\hat{\mu}_Y]=\mu_{XY}$$ or $$E[\frac{\hat{\mu}_X}{\hat{\mu}_Y}]=\mu_{\frac{X}{Y}}$$ Is the first one always true when $X \perp\ Y$? I don't have any guesses yet about the second part. Thank you. AI: You need to impose more restrictive conditions, even for the first result. Suppose $X$ and $Y$ are orthogonal (even more: independent) with zero mean, so $E(XY)=0$. Take the unbiased estimators $\hat\mu_X=X_1 + Y_1$, $\hat\mu_Y=Y_1$. Then $E[\hat{\mu}_X \hat{\mu}_Y]=\sigma_Y^2 \ne 0$. A sufficient requirement for the property to hold would be that $X$ and $Y$ are independent and $\hat{\mu}_X$ ($\hat{\mu}_Y$) is a function of samples of $X$ ($Y$) only; this would imply that the estimators themselves are independent. The independence condition could be relaxed to orthogonality in some cases (e.g. if the estimators are linear). The same conditions would not even be enough for the second. One would like to write $$E[\frac{\hat{\mu}_X}{\hat{\mu}_Y}]=E[\hat{\mu}_X]E[\frac{1}{\hat{\mu}_Y}]= \mu_X \frac{1}{\mu_Y} = E[\frac{X}{Y}]$$ but all those equalities are false in general. Assuming the conditions above (both $X$, $Y$ and the estimators are independent), then we'd need in addition that this holds: $E[1/Y] = 1/E[Y]$ (and the same for the estimator). This is false in general - it can be approximately true if $Y$ has small variance.
H: Please help me to prove this inequality: $|x|+|y|≥|x+y|$ Please help me to prove the following inequality: $|x|+|y|\geq|x+y|$ in which $x$ and $y$ are real numbers. Any help or hint would be appreciated. Thanks :) AI: Hint: If $w$ and $z$ are real numbers, $w,z \in \mathbb{R}$, (and not sets, as implied by your elementary-set-theory tag) then there are two cases: Either $w$ and $z$ have the same sign (both negative or both non-negative) or they don't (one is negative and the other non-negative). If $w$ and $z$ have the same sign, what happens to $w+z$? What if they have opposite signs? What's the largest $|w+z|$ can be? Hint: If $w$ and $z$ are complex numbers, $w,z\in \mathbb{C}$, then let $w=r_1 e^{i\theta_1}$, $z=r_2 e^{i\theta_2}$. What happens to $|w+z|$ if $\theta_1 \equiv \theta_2\ (\operatorname{mod}\ 2\pi)$? What if $\theta_1 \not \equiv \theta_2\ (\operatorname{mod}\ 2\pi)$? What is the largest $|w+z|$ can be? Hint: If $w$ and $z$ are sets, as not stated in your question, but implied by your tag, then two cases occur again. Case 1: suppose $w$ and $z$ share at least one element. Case 2: suppose $w$ and $z$ do not share any elements. What is the largest $|w+z|$ can be? Hint: If $\mathfrak{w}$ and $\mathfrak{z}$ are ideals of a ring $R$, then what you say is not true. Meta hint: You should include more information in your questions.
H: Solve a system of linear congruences I have this system: $$ \begin{align} a_{11} x_1 + a_{12} x_2 + \ldots + a_{1n} x_n &= b_1 \mod p \\ a_{21} x_1 + a_{22} x_2 + \ldots + a_{2n} x_n &= b_2 \mod p \\ \vdots \\ a_{n1} x_1 + a_{n2} x_2 + \ldots + a_{nn} x_n &= b_n \mod p \\ \end{align} $$ Can I solve it using ordinary Gaussian elimination? It seems to be incorrect to multiply a row by a constant, and then add or subtract this row from another, right? If Gaussian elimination can't be applied here, then what other technique can I use? AI: When $p$ is a prime number, the integers modulo $p$ still form a field (as the rationals and the reals do). Gaussian elimination works over any field, so in this case as well. Just be sure that you understand how to invert elements mod $p$ and how to test for zero mod $p$. For general $p$ (that is, when $p$ is not a prime number), you can still do something similar to Gaussian elimination. Just bring your coefficient matrix into Smith normal form over the integers, which is a diagonal matrix. This makes solving the resulting equations mod $p$ trivial. The Smith normal form is described here, for example: https://en.wikipedia.org/wiki/Smith_normal_form. The problem is even solvable in case the modulus on the right hand side is not the same for each equation. Given $$ a_{11} x_1 + \dots + a_{1m} x_m = b_1 \pmod {p_1} \\ \vdots\\ a_{n1} x_1 + \dots + a_{nm} x_m = b_n \pmod {p_n}, $$ rewrite the system as $$ a_{11} x_1 + \dots + a_{1m} x_m + p_1 y_1 = b_1 \\ \vdots\\ a_{n1} x_1 + \dots + a_{nm} x_m + p_n y_n = b_n, $$ which is in the variables $x_1, \dots, x_m, y_1, \dots, y_n$ (each congruence gets its own integer slack variable $y_i$). Solve this system over the integers (using the Smith normal form). Finally project the resulting solutions $(x_1, \dots, x_m, y_1, \dots, y_n)$ to $(x_1, \dots, x_m)$.
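For prime $p$, the elimination is the usual algorithm with all arithmetic done in $\Bbb Z/p\Bbb Z$; here is a minimal Python sketch (inverses via Fermat's little theorem; the composite case needs the Smith-normal-form approach described above):

```python
def solve_mod_p(A, b, p):
    """Gaussian elimination over Z/pZ for prime p. Returns the solution of
    A x = b (mod p) when it is unique, otherwise None."""
    n = len(A)
    M = [[a % p for a in row] + [bi % p] for row, bi in zip(A, b)]
    row = 0
    for col in range(n):
        piv = next((r for r in range(row, n) if M[r][col]), None)
        if piv is None:
            continue                            # no pivot in this column
        M[row], M[piv] = M[piv], M[row]
        inv = pow(M[row][col], p - 2, p)        # inverse by Fermat's little theorem
        M[row] = [x * inv % p for x in M[row]]
        for r in range(n):                      # clear the column elsewhere
            if r != row and M[r][col]:
                f = M[r][col]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[row])]
        row += 1
    return [M[r][n] for r in range(n)] if row == n else None

# 2x + 3y = 1 and x + y = 2 (mod 5)  ->  x = 0, y = 2
print(solve_mod_p([[2, 3], [1, 1]], [1, 2], 5))
```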
H: little problem about open set in the definition of topology Definition 1 Let $X$ be a set of points. A collection of subsets $U = \left\{U_{\alpha }\right\}$ forms a topology on $X$ if Any arbitrary union of the $U_{\alpha }$ is another set in the collection $U$. The intersection of any finite number of sets $U_{\alpha }$ in the collection $U$ is another set in $U$. Both the empty set $\emptyset$ and the whole space $X$ must be in $U$. The pair $(X, U)$ is called a topological space. The sets $U_{\alpha }$ in the collection $U$ are called open sets. My Question Is a finite isolated points set an open set? At present I think it is a closed set. Does Definition 1 suit for a finite isolated points set? It seems it just call $U_{\alpha }$ open sets without defining? AI: To answer your first question: Finite sets can be open, closed, both or neither depending on your topology. I'll give some examples of topological spaces where these possibilities occur in the form $(X,U)$. For $X$, I will always choose $\mathbb{R}$, but my $U$ will vary: $(\mathbb{R},\{\varnothing,\{0\},\mathbb{R}\})$ - In this topological space $\{0\}$ is open but not closed. $\mathbb{R}$ with its usual (metric) topology - In this space any finite set is closed, but not open. $(\mathbb{R},2^\mathbb{R})$, the so called discrete topology on $\mathbb{R}$ - In this topology every set (finite or infinite) is open and closed. $(\mathbb{R},\{\varnothing,\mathbb{R}\})$, the so called indiscrete topology on $\mathbb{R}$ - In this topology the only open sets as well as the only closed sets are $\varnothing$ and $\mathbb{R}$, hence all finite non-empty sets are neither open nor closed. It is a good exercise to check that all the above are actually topological spaces by your definition. To answer your second question: I think you are confused about what is being defined. Definition 1 is a definition of topological spaces, not of open sets.
H: Is there a symbol for matrix multiplication operator? Title says it all. Is there any specific operator symbol for matrix multiplication? Not just write down side by side but symbols like cross ($\times$). AI: Juxtaposition is the standard notational convention (to "write side by side" without an intermediary operation symbol): for matrices $A, B$ on which matrix multiplication is defined, write $AB$. Some texts may use the "dot" $A\cdot B$, but juxtaposition is more typical. See the entry in Wikipedia: Matrix Multiplication to disambiguate "standard" matrix multiplication from other matrix products: Hadamard product $A\circ B$, Frobenius product $A:B$, and Kronecker product $A\otimes B$. You'll see the "dot" and "cross" typically used with vectors (which are, also, matrices) to distinguish between the operations of the dot product and the cross product.
H: Are there any distinct finite simple groups with the same order? In finite group theory, it's often quite easy to show that there are no simple groups of a given order $n$. My question is different: is there some natural number $n$ such that there are two non-isomorphic simple groups of order $n$? The two easiest families of finite simple groups - $\mathbb{Z}_p$ for $p$ a prime and $A_n$ for $n\ge5$ - clearly don't yield any examples. It should be possible to check this using the classification of finite simple groups, but I don't know enough about the fabled 'sixteen infinite families of groups of Lie type' to do the check myself. Also, if the answer is 'no', I'd be much more interested to see an elementary(ish) proof than I would be to see a proof that relies on over 3,000 pages of non-stop hard group theory. AI: The links in the comments already give very good answers, but here is something similar: There are very few coincidences amongst the orders of the various finite simple groups in the various families. Many of these coincidences are explained by exceptional isomorphisms for groups that have more than one characteristic:$ \newcommand{\PSL}{\operatorname{PSL}} \newcommand{\PSU}{\operatorname{PSU}} \newcommand{\PSp}{\operatorname{PSp}} \newcommand{\PSO}{\operatorname{P\Omega}} $ Order 60, $A_5 \cong \PSL(2,4) \cong \PSL(2,5)$ Order 168, $\PSL(2,7) \cong \PSL(3,2)$ Order 360, $A_6 \cong \PSL(2,9)$ Order 20160, $A_8 \cong \PSL(4,2)$ Order 25920, $\PSO(5,3) \cong \PSU(4,2)$ Also one has various low dimension isomorphisms in general: $\PSU(2,q) \cong \PSL(2,q)$ $\PSO(5,q) \cong \PSp(4,q)$ But also a slightly weird one that is only an isomorphism in characteristic 2: $\PSO(2n+1,2^f) \cong \PSp(2n,2^f)$ This slightly weird one forces the orders of $\PSO(2n+1,q)$ and $\PSp(2n,q)$ to be the same for all $q$, however. So we get the order coincidence: $|\PSO(2n+1,q)| = |\PSp(2n,q)|$, but $\PSO(2n+1,q) \not\cong\PSp(2n,q)$ for odd $q$ The only other order coincidence is the very special $A_8 \cong \PSL(4,2)$ versus $\PSL(3,4)$ of order 20160. These results are proved for the classical and exceptional groups in Artin (1955). The original observation of the two distinct simple groups of order 20160 is from Schottenfels (1899). While Artin's techniques work for all groups of Lie type (the twisted types are not hard), it might be nice to see a post-CFSG version in Garge (2005), which also handles direct products of simple groups and knows the orders of all sporadic simple groups. Schottenfels, Ida May. “Two non-isomorphic simple groups of the same order 20,160.” Ann. of Math. (2) 1 (1899/00), no. 1-4, 147–152. MR1502265 DOI:10.2307/1967281 Artin, Emil. “The orders of the classical simple groups.” Comm. Pure Appl. Math. 8 (1955), 455–472. MR73601 DOI:10.1002/cpa.3160080403 Garge, Shripad M. “On the orders of finite semisimple groups.” Proc. Indian Acad. Sci. Math. Sci. 115 (2005), no. 4, 411–427. MR2184201 DOI:10.1007/BF02829803
H: How do you invert a characteristic function, when integral does not converge? I need to find the probability density of some distribution with characteristic function given by: $$\frac{1}{9} + \frac{4}{9} e^{iw} + \frac{4}{9} e^{2iw}$$ I know the formula for inverting a characteristic function is: $$f_X(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \phi(\omega) e^{-i\omega x} \mathop{d\omega}$$ But obviously putting this function inside the formula, will make the integral diverge. So my question is how does one invert a characteristic function, when this integral diverges? Or is this supposed to always converge and there is something wrong with my characteristic function? AI: The inversion formula you cite is restricted to integrable characteristic functions. The case in your question is $\varphi_X(\omega)=\sum\limits_{k=1}^np_k\mathrm e^{\mathrm i \omega a_k}$ with $p_k\gt0$ and $\sum\limits_{k=1}^np_k=1$, which is never integrable. Assume that $n=1$, that is, that $\varphi_X(\omega)=\mathrm e^{\mathrm i \omega a}$. Can you identify the distribution of $X$ in this case? Hint: there is no density. Then the general case might be straightforward.
H: How to solve this polynomial? How does one solve this? If $x/y + y/x = -1$, where $x$ and $y$ are not equal to zero, then what would be the value of $x^3 -y^3$? AI: Hint: $x^3 - y^3 = (x - y)(x^2 + y^2 + xy)$. Factor out $xy$ from the second term.
H: Understanding Formulas in First Order Logic I'm reading a text on Set Theory which states that any formula, say $\phi$, is ultimately built up from atomic sentences of form $x \in y$ and $x = y$ via the logical connectives. So then my question is as follows: suppose in reading this text I come upon a formula of form $\phi(x,y,z)$. Am I to interpret this formula as ultimately a claim about membership and equality? If $x,y,z$ are fixed, is $\phi(x,y,z)$ just really the claim $(x,y,z,1) \in \phi$ where $\phi$ is interpreted as a predicate function? If this post isn't clear, here is really the question I'm getting at: how am I supposed to interpret statements like $\phi(x,y,z)$ when reading a book concerning axiomatic set theory written in the language of first-order logic? AI: When you meet $\phi(x,y,z)$ then you know two things: $\phi$ is a formula, in this case written in the language of set theory which only includes $\in$ (and $=$ which is part of the logical symbols). $x,y,z$ denote the free variable which may (and usually do) occur in $\phi$. This means that $\phi$ can be something like $x\in y\lor x=z$. But it also means that $\phi$ can be something much much much more complicated: $$(\exists w(x\in w\rightarrow y\notin z)\lor\forall w(y\in w\rightarrow z\in x))\rightarrow\exists w\forall u(u\in x\leftrightarrow x\in w\land y\in u)$$ These formulas come in two common flavors: as a general statement whose content doesn't really matter for us right now, we may be interested in its structure (in particular in the quantifiers it has inside), but not in its content; and sometimes it will be a statement similar to "Let $\phi(x,y,z)$ be a formula saying that $x$ is an ordinal, and $y$ is a bijection from $x$ onto $z$", in which case this is exactly how you should understand it.
H: what is the meaning of "vector" in the context of algorithms? I'm familiar with the concept of vectors in math and physics, but this wording in a computer science textbook is unfamiliar to me. A configuration in the shared memory model is a vector $$C=(q_0,\dots, q_{n-1}, r_0,\dots, r_{m-1}) $$ AI: This use of "vector" just means something with a number of components, in a specific order. Mathematicians would usually call it a (finite) sequence.
H: What subsets of a covering space cover their image? Say I have a covering map $p \colon E \to B$. Then for which subsets $F$ of $E$, is $p|_F \colon F \to p(F)$ a covering map? If it makes things easier, assume $E$ is simply connected, that is, the universal cover of $B$. I was looking at the group of deck transformations (I don't know if they make sense in the most general settings) and it seems that sets of orbits under a fixed subgroup work. We should also be able to take union of such things, but I'm not sure under what conditions. AI: Any component of a preimage of a subset $A$ of $B$ is a cover of $A$, I believe (at least for open subsets). Taking the preimage of small subsets just gives you one copy of the small subset for each group element, and taking larger subsets gives you more interesting covering spaces. In fact, two preimages 'merge together' exactly when the original set contains a non-trivial loop. For instance, a simply connected subset of the torus lifts to infinitely many disjoint copies of itself, but a big patch that loops around a meridian and touches itself lifts to disjoint copies of an 'unwinding' of itself, an infinite set that is homotopic to a line. One problem with taking preimages of non-open subsets is that the open subset of $B$ corresponding to small neighborhoods in the non-open subset under the subspace topology may not be well-covered.
H: Can it happen that the image of a functor is not a category? In Hilton and Stammbach's homological algebra book, at the end of Chapter 2, they write that $F(\mathfrak{C})$ is not a category at all in general. But I don't quite get it. I checked the axioms of a category for the image, and I think they are all satisfied. Am I missing something? Thanks. AI: Consider the category $C$ with four objects, $a,b,c,d$ and, other than identity arrows, a single arrow $a\to b$ and a single arrow $c\to d$. Now consider the category $D$ with three objects $x,y,z$, and, aside from identity arrows, the arrows $x\to y$, $y\to z$, and $x\to z$. Now, consider the functor $F:C\to D$ with $F(a)=x$, $F(b)=F(c)=y$, and $F(d)=z$ (extended uniquely to arrows). Its image is not a category. This business is related to the fact that epis in $Cat$ are not so simple at all. In work of Isbell epis in $Cat$ are characterized. It's worth noting that regular epis, split epis, etc. in $Cat$ are quite different, attesting again to the subtlety of epis.
H: A group that has a finite number of subgroups is finite I have to show that a group that has a finite number of subgroups is finite. For starters, I'm not sure why this is true. I was thinking: what if I have 2 subgroups, one that is infinite and the other one that might or might not be finite? That means that the group isn't finite, or is my consideration wrong? AI: For each $g\in G$, you have the group $\langle g\rangle$. If any of these is infinite, it is isomorphic to $\mathbb Z$ and hence has infinitely many subgroups. So if we exclude this, $G$ is the union of finitely many finite sets.
H: Chinese Remainder Theorem and matrix In $\operatorname{SL}_2\left(\Bbb Z\right)$, Theorem 3.2 on p. 5 states that (for an integer $N$ such that $(a,b,N) = 1$) there exists a $b' \equiv b \pmod{N}$ such that $(a,b')=1$, and that this can be done by using the CRT. But I can't do this. I write $N=p_1^{e_1}\cdots p_k^{e_k}$ and use the fact that $(p_i^{e_i},p_j^{e_j})=1$ for $i \not= j$, and try to use the CRT, but I don't know how to deal with the part "such that $(a,b')=1$"; please help. AI: The key idea is the following simple Theorem $\,\ \ b+c\ $ is coprime to $\ a\:$ if every prime factor of $\,a\,$ divides $\,b\,$ or $\,c,\,$ but not both. Proof $\ $ If not, then $\,a\,$ and $\,b+c\,$ have a common prime factor $\,p.\,$ By hypothesis $\,p\mid b\,$ or $\,p\mid c.\,$ Wlog, say $\,p\mid c.\,$ Then $\,p\mid (b+c)-c = b,\,$ so $\,p\,$ divides both $\,b,c,\,$ contra hypothesis. $ $ QED Since we seek $\,b+nc\,$ coprime to $\,a,\,$ it suffices to choose $\,c\,$ such that each prime factor $\,p\,$ of $\,a\,$ divides exactly one of $\,b\,$ or $\,nc.\,$ Note $\,p\,$ can't divide both $\,b,n,\,$ else $\,p\mid a,b,n,\,$ contra hypothesis. Therefore it suffices to choose $\,c\,$ to be the product of primes in $\,a\,$ that do not occur in $\,b\,$ or in $\,n.\,$ This method of generating (co)primes by partitioning the prime factors of $\,a\,$ into two summands has an illustrious history, e.g. Stieltjes used it to generalize Euclid's classical proof that there are infinitely many primes: split the product $\: a\,$ of the prior primes into two products $\,b,c.\,$ Their sum yields an integer coprime to the prior primes, so its prime factors are new, i.e. not among the prior primes. Euclid's classic proof is simply the special case where $\, c = 1.$
H: Understanding the Definition of the Axiom Schema of Specification Consider the Axiom Schema of Separation: If $P$ is a property (with parameter $p$), then for any $X$ and $p$ there exists a set $Y = \{u \in X : P(u,p)\}$ that contains all those $u \in X$ that have property $P$. Now consider two potential interpretations of this axiom schema: (1) If $X$ is a set, and $P$ is any arbitrary property, then we can specify a subset of $X$ in which all of the members of this subset satisfy $P$. (2) If $X$ is a set, and $P$ is any arbitrary property that can be expressed in terms of a finite number of expressions involving only the relations of $\in$ and $=$ and the logical connectives, then we can specify a subset of $X$ in which all of the members of this subset satisfy $P$ given those constraints. My question is: am I correct in assuming that (2) is the proper way to understand this axiom schema, and that (1) and (2) are strictly distinct from each other? AI: The magnificent beauty of the axioms of $\sf ZF$ is that they allow us, with only $\in$ to express so much. When we say an arbitrary property, we mean one that can be expressed in the language of set theory. Otherwise it will be impossible to write the axiom relevant to that property in the language of set theory, which is the language of $\sf ZF$. You seem to forget that there are quantifiers to be used in the formulas. Not everything is boolean combinations of atomic formulas and their negations. No, we make heavy use of quantification here. For example $\subseteq$ can be defined as $x\subseteq y\iff\forall z(z\in x\rightarrow z\in y)$, and we can define when a set is transitive, $\forall y(y\in x\rightarrow\forall u(u\in y\rightarrow u\in x))$, or in shorter form, $\forall y(y\in x\rightarrow y\subseteq x)$. We can define $x\cup y$, and $x\cap y$, and more and more. Some of the formulas require us to rely on the axioms in order to prove their correctness, but that's fine. We are allowed to do that. But we can sit and write formulas which quickly become more and more complicated and those express a lot. A lot more than just $x\in y$ or $x=y$.
H: Show that V is a subspace of M2x2 Matrices and Determine a basis A bit of information to start us off: Let $V$ denote the set of all $2\times 2$ matrices with equal column sums. Show $V$ is a subspace of $M_{2\times 2}$, and determine a basis for $V$. So for the first bit... $M$ is the set of all $2\times 2$ matrices; I used the definition of subsets: $A$ is a subset of $B$ if for every element "$a$" in $A$, "$a$" is also in $B$. Am I able to then say: since $V$ is the set of all $2\times 2$ matrices with equal column sums, "$a$" being an element of $V$ means "$a$" is a $2\times 2$ matrix. Since "$a$" is a $2\times 2$ matrix, "$a$" is in $M_{2\times2}$ as well. Therefore the set of all $2\times 2$ matrices with equal column sums is a subset of $M_{2\times2}$? What I am really unsure of is how to determine a basis. I'm used to being given a spanning set and working from there. Thanks! AI: Note that a $2\times2$ matrix $A$ has equal column sums iff $e^T A (e_1-e_2) = 0$, where $e=(1,1)^T$. The operator $L(A) = e^T A (e_1-e_2)$ is a linear function of $A$, hence $V = \ker L$, and so is a subspace (the kernel of a linear operator is automatically a subspace). Since $\dim \mathbb{R}^{2 \times 2} = 4 = \dim \ker L + \dim {\cal R} L$, we see that $\dim V = \dim \ker L = 4-1 =3$. Intuitively, note that if you pick 3 elements of a $2 \times 2$ matrix, you can always choose the fourth to satisfy the equation. So, let the $A_{22}$ element be the 'special' element. A standard basis for $\mathbb{R}^{2 \times 2}$ is $e_i e_j^T$ for $i,j = 1,2$. So, pick one of these and figure out what the $22$ element must be to satisfy $L(A) = 0$. Working through the computation gives $L(e_i e_j^T-\alpha e_2 e_2^T) = \delta_{1j}-\delta_{2j} + \alpha$, hence $\alpha = \begin{cases} -1 & j=1 \\ +1 & j = 2\end{cases}$. This construction produces three non-zero matrices that lie in $V$ by construction. It is easy to show that these are linearly independent, hence they form a basis.
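A quick numerical confirmation of the construction (a sketch using NumPy; `e @ B @ d` evaluates $e^TB(e_1-e_2)$, the difference of the two column sums):

```python
import numpy as np

e = np.ones(2)               # e = (1, 1)^T
d = np.array([1, -1])        # e_1 - e_2
basis = [np.array([[1, 0], [0, 1]]),    # (i,j) = (1,1), alpha = -1
         np.array([[0, 0], [1, 1]]),    # (i,j) = (2,1), alpha = -1
         np.array([[0, 1], [0, -1]])]   # (i,j) = (1,2), alpha = +1
for B in basis:
    print(e @ B @ d)         # 0.0 each time: equal column sums, so B lies in V
# linear independence: the flattened matrices have full rank 3
print(np.linalg.matrix_rank(np.stack([B.ravel() for B in basis])))
```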
H: Estimation of the number of prime numbers in a $b^x$ to $b^{x + 1}$ interval This is a question I put to myself a long time ago, although only now am I posting it. The thing is, though there is an infinity of prime numbers, they become more and more scarce the further you go. So back then, I decided to make an inefficient program (the inefficiency was not intended, I just wanted to do it quickly; it took more than 10 minutes to get the numbers below, and I got them now from a sheet of paper, not the program) to count primes between powers of different bases. These are the numbers I got, where the column labelled $x$ counts the primes in the interval $[b^x,\, b^{x+1})$: $$\begin{array}{l|ccccccccccccc} x & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline \text{base } 2 & 0 & 2 & 2 & 2 & 5 & 7 & 13 & 23 & 43 & 75 & 137 & 255 & 464 \\ \text{base } 3 & 1 & 3 & 13 & 13 & 31 & 76 & 198 & 520 & 1380 & 3741 \\ \text{base } 10 & 4 & 21 & 143 & 1061 & 8363 \end{array}$$ I made three histograms from this data (one for each base, with the respective logarithmic scales both on the $x$ and $y$ axes) and drew a line over them, that seemed like a linear function (you can try it yourselves, or if you prefer, insert these into some program like Excel, Geogebra, etc.). My question is: are these lines really tending (as the base and/or as $x$ grows) to linear or even any kind of function describable by a closed form expression? AI: The prime number theorem is what you need. A rough statement is that if $\pi(x)$ is the number of primes $p \leq x$, then $$ \pi(x) \sim \frac{x}{\ln(x)} $$ Here "$\sim$" denotes "is asymptotically equal to". A corollary of the prime number theorem is that, for $1\ll y\ll x$, $\pi(x)-\pi(x-y) \approx y/\ln(x)$. So yes, the number of primes start to thin out for larger $x$; in fact, their density drops logarithmically. To address your specific question, the PNT implies: \begin{align} \pi(b^{x+1}) - \pi(b^x) &\sim \frac{b^{x+1}}{\ln( b^{x+1} )} - \frac{b^x}{\ln( b^x )}, \end{align} where \begin{align} \frac{b^{x+1}}{\ln( b^{x+1} )} - \frac{b^x}{\ln( b^x )} &= \frac{b^{x+1}}{(x+1)\ln(b)} - \frac{b^x}{x\ln(b)}\\ &=\frac{b^x}{\ln(b)}\left( \frac{b}{x+1} - \frac{1}{x} \right)\\ &=\frac{b^x}{\ln(b)}\left( \frac{ bx-(x+1) }{x(x+1)} \right)\\ &=\frac{b^x}{\ln(b)}\left( \frac{ x(b-1)-1 }{x(x+1)} \right) \end{align} For $x\gg 1$, we can neglect the '$-1$' next to $x(b-1)$ in the numerator and the '$1$' next to $x$ in the denominator, so that: \begin{align} \frac{b^{x+1}}{\ln( b^{x+1} )} - \frac{b^x}{\ln( b^x )} &\approx \frac{b^x}{\ln(b)}\left( \frac{ x(b-1) }{x^2} \right)\\ &=\frac{b^x(b-1)}{x\ln(b)}, \end{align} so that $$ \pi(b^{x+1}) - \pi(b^x) \sim \frac{b^x(b-1)}{x\ln(b)}. $$ As I'm writing this, I see that this is exactly the same as the answer that @Charles gave.
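For comparison, a small sketch (assuming SymPy's exact `primepi`) that sets this estimate against the true counts; e.g. for base $10$, $x=4$ the exact count reproduces the $8363$ in the question's table:

```python
from math import log
from sympy import primepi          # exact prime-counting function pi(x)

def pnt_estimate(b, x):
    """PNT-based estimate for the number of primes in [b^x, b^(x+1))."""
    return b ** x * (b - 1) / (x * log(b))

for b, x in [(2, 10), (2, 12), (3, 8), (10, 4)]:
    actual = primepi(b ** (x + 1)) - primepi(b ** x)
    print(b, x, actual, round(pnt_estimate(b, x)))
```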
H: Difference between covariant and contravariant tensors Let us write $$P_\mu A^\mu = - P_ \mu A_\mu $$ where $P$ is momentum and $A$ is a vector potential. My query is: when we interchange the covariant and contravariant tensor, we get a negative sign; why is that? Is there any physical or geometrical significance to it? AI: While the left hand side of your equation is a scalar (which is a tensor of rank 0), the right hand side is not invariant under coordinate transformations; it is neither a tensor nor a scalar. Therefore the answer is: Because you happened to choose values of $A$ and $P$ so that the expression on the right hand side happens to be, in your current coordinate system, the negative of the expression on the left hand side. If you want to have meaningful expressions (where "meaningful" means "independent from your coordinate system", or "physical" when you're doing physics calculations), always remember the rule: Only contract an upper index with a lower index. Never contract two upper or two lower indices. On your right hand side, you contract two lower indices. Note that if you have two lower indices, you can raise the index using the metric, by the equation $$P^\mu = g^{\mu\nu}P_\nu$$ and then you've got an upper index to contract with the other, lower index. Of course, in your case you then recover the left hand side expression: $$g^{\mu\nu}P_\nu A_\mu=P^\mu A_\mu$$
H: derivative of parameter integral in $\mathbb C$ Let $f:\mathbb R\rightarrow\mathbb R$ be continuous and let $g(x):=xf(x)$ be absolutely integrable. Then $\widehat f'=-i\widehat g$. I know this would be true if I can differentiate under the integral in $\widehat f(w)=\int_{-\infty}^\infty f(x)\exp(-iwx)dx$. But when can I do that? I know in $\mathbb R$ that $g'(y)=\int_a^b\frac d{dy}f(x,y)dx$ holds if $f$ is continuously partially differentiable. But now my Fourier transform is in $\mathbb C$. So are the conditions the same? And why? AI: The Fourier transform here is applied to functions of a real variable; that's essential. The condition for differentiating under the integral sign is uniform convergence. So you should show that $\int\limits^{+\infty}_{-\infty}{\left|xf(x)e^{-iwx}\right|dx}$ converges uniformly. That's easy.
H: Is there always a prime number between $p_n^2$ and $p_{n+1}^2$? The following table indicates that there is a prime number $p$ between the squares of two consecutive primes. $$ \displaystyle \begin{array}{rrrr} \text{n} & p_n^2 & p_{n+1}^2 & \text{p} \\ \hline 1 & 4 & 9 & 7 \\ 2 & 9 & 25 & 23 \\ 3 & 25 & 49 & 47 \\ 4 & 49 & 121 & 113 \\ 5 & 121 & 169 & 167 \\ 6 & 169 & 289 & 283 \\ 7 & 289 & 361 & 359 \\ 8 & 361 & 529 & 523 \\ 9 & 529 & 841 & 839 \\ 10 & 841 & 961 & 953 \end{array} $$ Can anyone prove that for each natural number $n$ there is always a prime number $p$, such that $p_n^2<p<p_{n+1}^2$? AI: Maybe. Can we prove it? The answer has to be no. Since there may be infinitely many primes with $p_{n+1}-p_n = 2$, and since we cannot now prove that there is a prime between $n^2$ and $(n+2)^2$ for all $n$, the answer seems clear-cut.
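Although a proof is out of reach, extending the question's table empirically is easy; a sketch assuming SymPy's `prime` (the $n$-th prime) and `nextprime`:

```python
from sympy import prime, nextprime

for n in range(1, 21):
    lo, hi = prime(n) ** 2, prime(n + 1) ** 2
    p = nextprime(lo)                 # smallest prime above p_n^2
    assert p < hi                     # empirically it falls below p_{n+1}^2
    print(n, lo, p, hi)
```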
H: How can I compute this probability? If we have: P(C) = 0.01 , P(!C) = 0.99 P(+|C) = 0.9 , P(-|C) = 0.1 P(-|!C) = 0.8, P(+|!C) = 0.2 How can I compute this probability? P(C | (T1 and T2)). T1 and T2 are independent and T1 = +, T2 = +. This is how far I've gotten: P(+ and C) = P(C).P(+|C) = 0.01 x 0.9 = 0.009 P(+ and !C) = P(!C).P(+|!C) = 0.99 x 0.2 = 0.198 P(+) = P(+ and C) + P(+ and !C) = 0.207 T1 and T2 are independent so: P(T1 and T2) = P(+ and +) = P(+) x P(+) = 0.207 x 0.207 P(C|(T1 and T2)) = P(C and (T1 and T2)) / P(T1 and T2) What's next? I don't know if I'm in the right place. AI: Represent the prior probability distribution as $(1,99)$ (we'll leave normalizing for the last step). Represent the likelihood function for "$+$" as $(9,2)$. Let "$\circ$" be term-by-term multiplication, thus $(a,b)\circ(c,d)=(ac,bd)$, and understand each pair to be equivalent to any of its scalar multiples, e.g. $(1,4)$ is the same as $(0.2,0.8)$. Then we have $$ (1,99)\circ(9,2)\circ(9,2)=(81,396)=\left(\frac{81}{477},\frac{396}{477}\right)\approx(0.1698,0.8302). $$ So that's the posterior probability distribution, i.e. the conditional distribution given the two independent positive tests. Now the more long-winded version: \begin{align} \Pr(C\mid +\,+) & = \frac{\Pr(C\text{ and }+\,+)}{\Pr(+\,+)} \\[10pt] & = \frac{\Pr(C\text{ and }+\,+)}{\Pr((C\text{ and }+\,+)\text{ or }((\text{not C})\text{ and } +\,+))} \\[10pt] & = \frac{\Pr(C\text{ and }+\,+)}{\Pr(C\text{ and }+\,+) + \Pr((\text{not C})\text{ and } +\,+)} \\[10pt] & = \frac{\Pr(+\,+\mid C)\Pr(C)}{\Pr(+\,+\mid C)\Pr(C)+\Pr(+\,+\mid\text{not} C)\Pr(\text{not }C)} \\[10pt] & = \frac{(0.9)^2(0.01)}{(0.9)^2(0.01)+ (0.2)^2(0.99)} \approx 0.1698 \end{align}
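The whole computation fits in a few lines; a minimal Python sketch of the long-winded version:

```python
p_c, p_pos_given_c = 0.01, 0.9
p_not_c, p_pos_given_not_c = 0.99, 0.2

# two independent positive tests: multiply the likelihoods
joint_c = p_c * p_pos_given_c ** 2              # P(C and ++)  = 0.0081
joint_not_c = p_not_c * p_pos_given_not_c ** 2  # P(!C and ++) = 0.0396
print(joint_c / (joint_c + joint_not_c))        # 0.1698... = 81/477
```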
H: Rolle's theorem for limits Consider the functions defined by $a_n(x) = \frac{d^n}{dx^n} e^{-x^2}$. Every such function can be written as $a_n(x) = h_n(x) e^{-x^2}$ where $h_n(x)$ is a polynomial (the Hermite-Chebyshev polynomial of degree $n$ to be precise). I want to prove that every polynomial $h_n(x)$ has $n$ real roots. I am trying to prove this by induction. Since $a_0(x)$ is symmetric and has no roots, $a_1(x)$ has one root (Rolle's theorem). Now suppose $a_n(x)$ has $n$ roots. From Rolle's theorem I get that $a_{n+1}(x)$ must have at least $n-1$ roots. But there are two more: $a_{n+1}(x)$ has a root between $-\infty$ and the first root of $a_n(x)$, and between the last root of $a_n(x)$ and $+\infty$. I know that $\lim_{x \to \pm \infty} a_n(x) = 0$. So my question is: is Rolle's theorem valid for limits? If $f(x_0) = \lim_{x \to \infty} f(x)$ for some $x_0$, does it necessarily follow that $f'(x) = 0$ somewhere in $[x_0, \infty)$? Can you prove this? AI: If $f$ is a differentiable function on the real line, and $f(x)\to 0$ as $x\to+\infty$ and $f(a)=0$ for some $a\in\mathbb R$ then $f'(b)=0$ for some $b\in(a,+\infty)$. Proof: Show that $f(x)$ must have a maximum or minimum on $(a,+\infty)$. Alternatively, let $g(x)= f(a-1+ 1/x)$ for $x\in(0,1]$ and $g(0)=0$. Then $g(x)$ is a continuous function on $[0,1]$ and $g(0)=g(1)=0$. But $g'(x)=\frac{-1}{x^2}f'(a-1+1/x)$ for $x\in(0,1)$, so we can apply Rolle's theorem to show there must be a $b\in(0,1)$ such that $g'(b)=0$. But then we conclude that $f'(a-1+1/b)=0$.
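Since $a_{n+1} = a_n' = (h_n' - 2x\,h_n)e^{-x^2}$, the polynomials satisfy $h_{n+1} = h_n' - 2x\,h_n$, so the claim can be checked symbolically for small $n$; a sketch assuming SymPy:

```python
import sympy as sp

x = sp.symbols('x')
h = sp.Integer(1)                          # h_0 = 1, since a_0 = e^{-x^2}
for n in range(1, 8):
    h = sp.expand(sp.diff(h, x) - 2*x*h)   # h_{n+1} = h_n' - 2x h_n
    print(n, sp.degree(h, x), len(sp.real_roots(h, x)))   # degree n, n real roots
```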
H: How to determine $\lim\limits_{x\to \pi/2 }\frac{\tan 2x}{x - \pi/2 }$? $$\lim_{x \to \pi/2 } \frac{\tan 2x}{x-\pi/2}$$ Could anyone help me with this trigonometric limit? I'm trying to evaluate it without L'Hôpital's rule and derivation. AI: Putting $x-\frac\pi2=y,$ as $x\to\frac\pi2,y\to0$ $$\lim_{x\to\frac\pi2}\frac{\tan2x}{x-\frac\pi2}=\lim_{y\to0}\frac{\tan 2(y+\frac\pi2)}y=\lim_{y\to0}\frac{\tan 2y}y\text{ as }\tan(\pi+2y)=\tan2y$$ $$=2\lim_{y\to0}\frac{\sin 2y}{2y}\frac1{\lim_{y\to0}\cos2y}=2$$
H: Determining bounds for the sum $\sum\limits_{n=1}^\infty \frac{1}{2^n - 3^n }$ I have to give lower and upper bounds for the following: $$ \sum_{n=1}^\infty \frac{1}{2^n - 3^n } $$ How do I determine an upper bound? How can I show this sum exists? edit: removed erroneous conclusion. AI: I strongly suggest not working with the sum as it stands. The terms are obviously negative. Change signs. Your intuition will get much better. For sure mine would. Find upper and lower bounds for the sign-altered sum, and then use the result to draw conclusions about upper and lower bounds for the original sum. To show that the sign-altered sum exists, we can note that $2^n\le \frac{2}{3}3^n$, and therefore $\frac{1}{3^n-2^n} \le \frac{3}{3^n}$. From this you can also get an upper bound for $\sum_1^\infty \frac{1}{3^n-2^n}$. As for a lower bound for $\sum_1^\infty \frac{1}{3^n-2^n}$, a very easy one is $0$! Almost as easy is to use the first term, which is $1$.
H: How to plot biconditional graphs? I'm calling these biconditional graphs for lack of a better name; I don't know the real name, and if you do, feel free to edit it. Draw the set of points given by: $|x-3|\leq1$ and $|y-2|\leq5$ I have evaluated the values for which the conditions are true and I obtained $\{x:2\leq x\leq 4\}$ and $\{y:-3\leq y\leq 7 \}$. Now I'm trying to plot it as if it were a function $f(x):=|x-3|\leq1$, and then I'm using the values of $y$ to make a correlation with $x$, for example: $$\begin{eqnarray*} {f(2)}&:=&{|2-3|\leq1} \\ {}&:=&{1\leq 1 } \end{eqnarray*}\tag{1}$$ $$\begin{eqnarray*} {f(3)}&:=&{|3-3|\leq1} \\ {}&:=&{0\leq 1 } \end{eqnarray*}\tag{2}$$ $$\begin{eqnarray*} {f(4)}&:=&{|4-3|\leq1} \\ {}&:=&{1\leq 1 } \end{eqnarray*}\tag{3}$$ Thus $f(2)=1$, $f(3)=0$, $f(4)=1$, but I can't find a match, because there are no $0$'s or $1$'s in the interval $2\leq x \leq 4$; is this correct? The secondary hypothesis I have is that this kind of operation is meant to plot regions, not lines; then I should just draw a region, is that it? AI: Note that you have two inequalities: solving one gives you the values of $x$ defined by the first inequality; solving the other gives you the values of $y$ defined by the second inequality. Neither inequality is a function, and so $x$ is certainly not a function of $y$, nor is $y$ a function of $x$. Try graphing the lines $y = -3$ and $y = 7$: those will be the horizontal bounds of the region defined by your inequalities. Then graph the vertical lines $x = 2$, $x = 4$, giving you the bounded region of interest: the rectangular region with vertices: $$(2,-3),(2,7),(4,-3),(4,7)$$ Then simply "shade" the region bounded by these lines, inclusive of the lines. (A plot from Wolfram Alpha is omitted here; note that properly scaled, with matched $x$- and $y$-axes, the rectangle is much "taller" than it is wide.)
H: Question Involving Transitive Sets In Jech's Set Theory, we are asked to show the following two statements: 1.3 If $X$ is inductive, then the set $\{x \in X : x \subset X\}$ is inductive. Hence $N$ is transitive, and for each $n$, $n = \{m \in N : m < n\}$. 1.4 If $X$ is inductive, then the set $\{x \in X : x\text{ is transitive}\}$ is inductive. Hence every $n \in N$ is transitive. Yet haven't we shown $1.4$ the moment we have shown $1.3$, since $\{x \in X : x \subset X\} = \{x \in X : x\text{ is transitive}\}$? AI: It’s not true in general that $\{x\in X:x\subseteq X\}=\{x\in X:x\text{ is transitive}\}$. My answer to your earlier question gives an example in which they’re not equal.
H: How to compute Legendre symbol $\Bigl(\frac{234987}{9086}\Bigr)$? How to compute $(\frac{234987}{9086})$? I know that the Legendre symbol is $(\frac{p}{q})$ where $p \in \mathbb{Z}$ and $q$ is an odd prime, and the Jacobi symbol is $(\frac{p}{n})$ where $p \in \mathbb{Z}$ and $n$ is an odd integer. But in this case $n=9086$ is even. So is it still possible to use the Jacobi symbol formula by making the canonical presentation $9086=2\cdot 7 \cdot 11 \cdot 59$ and then calculating the product of the Legendre symbols at those primes? Especially I would like to know if there is a need to use Quadratic reciprocity. I must make a point that even if the answer were 1, it wouldn't mean that $x^2 \equiv 234987 \pmod {9086}$ is solvable. AI: It seems that the intention of this problem is to use the Kronecker symbol. It generalizes the Jacobi and Legendre symbols so you can evaluate $\left(\frac{a}{n}\right)$ for all $n\in\Bbb Z$. If $n = u\cdot p_1^{e_1}p_2^{e_2}\cdots p_k^{e_k}$ is the prime factorization of $n$ with $u = \pm 1$ (a unit), then $$ \left(\frac{a}{n}\right) := \left(\frac{a}{u}\right)\prod_{i = 1}^k\left(\frac{a}{p_i}\right)^{e_i}, $$ where if $p_i$ is odd, then $\left(\frac{a}{p_i}\right)$ is the Legendre symbol, and \begin{align*} \left(\frac{a}{1}\right)&:= 1,\\ \left(\frac{a}{-1}\right)&:= \begin{cases}-1, \quad a < 0\\ 1, \quad a\geq 0\end{cases},\\ \left(\frac{a}{2}\right)&:= \begin{cases}0, \quad a\in 2\Bbb Z\\ 1, \quad a\equiv \pm 1\mod 8\\ -1,\quad a\equiv\pm 3\mod 8\end{cases},\\ \left(\frac{a}{0}\right)&:=\begin{cases}1, \quad a = \pm 1\\ 0, \quad a\neq\pm 1\end{cases}. \end{align*} It also satisfies the following reciprocity law: Suppose $n = 2^e n'$, $m = 2^f m'$ where $n', m'\in 2\Bbb Z + 1$ (for $n = 0$, $n' = 1$), and let $n^* = (-1)^{(n' - 1)/2}n$. Then if $n\geq 0$ or $m\geq 0$, we have $$ \left(\frac{n}{m}\right) = \left(\frac{m^*}{n}\right) = (-1)^{\left(\frac{n'-1}{2}\right)\left(\frac{m'-1}{2}\right)}\left(\frac{m}{n}\right). $$ So the Kronecker symbol shares many properties (which can be found at the wikipedia page) with the Legendre and Jacobi symbols (under certain restrictions), but one needs to be a bit careful, as certain things are different.
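Here is a small Python sketch of the definition above (my code; the helper name is mine, and SymPy's `jacobi_symbol` handles the odd part). For this instance the odd prime factors $7$, $11$, $59$ each contribute $+1$ while the factor $2$ contributes $-1$, so the symbol evaluates to $-1$; as the question stresses, the symbol's value alone does not settle solvability when the modulus is not an odd prime.

```python
from sympy.ntheory import jacobi_symbol

def kronecker(a, n):
    # Kronecker symbol, following the case definition given above
    if n == 0:
        return 1 if a in (1, -1) else 0
    sign = -1 if (n < 0 and a < 0) else 1    # the (a/-1) factor
    n = abs(n)
    two = 1
    while n % 2 == 0:                        # one (a/2) factor per power of 2
        n //= 2
        if a % 2 == 0:
            two = 0
        elif a % 8 in (3, 5):                # a = +-3 (mod 8)
            two = -two
    return sign * two * jacobi_symbol(a, n)  # odd part via the Jacobi symbol

print(kronecker(234987, 9086))  # -1 (7, 11, 59 each give +1; the factor 2 gives -1)
```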
H: Finding the x-coordinate of the max point of $y = x\sqrt {\sin x} $ so that it satisfies the equation $2\tan x + x = 0$ The maximum point on the curve with equation $y = x\sqrt {\sin x} $, $0 < x < \pi $, is the point A. Show that the x-coordinate of point A satisfies the equation $2\tan x + x = 0$ I understand that I need to determine the stationary points on the curve by equating the first derivative to 0, however I run into a problem once I find that derivative and do that; here's what I've done: $\eqalign{ & y = x\sqrt {\sin x} = x{\left( {\sin x} \right)^{{1 \over 2}}} \cr & {{dy} \over {dx}} = {1 \over 2}x\cos x{\left( {\sin x} \right)^{ - {1 \over 2}}} + {\left( {\sin x} \right)^{{1 \over 2}}} \cr & {{dy} \over {dx}} = {1 \over 2}{\left( {\sin x} \right)^{ - {1 \over 2}}}\left( {x\cos x + 2\sin x} \right) \cr & 0 = {1 \over 2}{\left( {\sin x} \right)^{ - {1 \over 2}}}\left( {x\cos x + 2\sin x} \right) \cr} $ Where do I go from here? Even if I eliminate the sine expression outside the brackets I still have trouble doing anything with the expression inside the brackets due to the x attached to the cosine. What can be done here? How else can I satisfy this equation? Thank you! AI: Hint: rewrite the final expression as $$\frac{x\cos x + 2\sin x}{2(\sin x)^{\frac{1}{2}}} = 0$$. This is $0$ if and only if $x\cos x + 2\sin x = 0$, or dividing both sides by $\cos x$ (for $x \neq \frac{\pi}{2}$, which is clearly not a solution), we see that this is precisely $$x + 2\tan x = 0$$
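A numerical footnote (my addition): solving $x\cos x + 2\sin x = 0$ on $(0,\pi)$ with SciPy locates the stationary point and confirms it satisfies $x + 2\tan x = 0$:

```python
import numpy as np
from scipy.optimize import brentq

g = lambda x: x*np.cos(x) + 2*np.sin(x)        # numerator of dy/dx
x_a = brentq(g, np.pi/2 + 1e-9, np.pi - 1e-9)  # g changes sign on (pi/2, pi)
print(x_a)                   # ~ 2.289
print(x_a + 2*np.tan(x_a))   # ~ 0, as required
```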
H: A functional equation with inequality Find all (at least one) functions $f\colon \mathbb{R}\to \mathbb{R}$ (or show there is none), such that $$ f(x^3+x)≤x≤f(x^3)+f(x), \quad \text{for all $x\in \mathbb{R}$}. $$ This is a problem asked by a friend of mine, and I do not know how to proceed. Known easy facts: $f(0)=0$. It seems the problem can be split to $x>0$ and $x<0$ separately. Let $f_{-}(x)=-f(-x)$ for $x>0$, then $f_{-}(x^3+x)\ge x\ge f_{-}(x^3)+f_{-}(x)$. AI: Question: What is the value of $f(-2)$? If we let $x=-1$, we see that $f(-2)\le -1$. Next, let $a$ satisfy $a^3+a=-8$, and note that $a\approx -1.8$. Then let $x=-2$. We have: $$-2\le f(-8)+f(-2)=f(a^3+a)+f(-2)\le a+f(-2)$$ These inequalities give: $$-2-a\le-1\\-a\le1\\a\ge-1$$ a contradiction. No such function exists.
H: On the magnitude of vectors Imagine a vector a in three dimensional Cartesian coordinates, the vector's endpoint coordinates are ($X_a,Y_a,Z_a$). Now let's assume this vector is in standard form (the vector "begins" at the origin). The formula for the distance between the endpoint and the origin in Cartesian three-space is as follows, where d represents distance and X,Y,Z are the coordinates. $d^2=X^2+Y^2+Z^2$ With that said, is "d" equal to the magnitude of the vector, since the magnitude of a vector is represented by the length of the line segment? AI: Here is an (ugly) drawing: The $\color{blue}{base}$ of the triangle can be viewed as in the $xy$-plane and using Pythagoras' Theorem we get the length $L_B = \sqrt{x^2 + y^2}$. The $\color{green}{height}$ of the triangle has length $L_H = \sqrt{z^2}$. Therefore, using Pythagoras' theorem, the square of the length of $\color{red}{A = (x,y,z)}$ is $$L_A^2= L_B^2 + L_H^2 = \left(\sqrt{x^2 + y^2}\right)^2 + \left(\sqrt{z^2}\right)^2 = x^2 + y^2 + z^2 = D^2.$$ Hope this helps!
H: Graph with no even cycles has each edge in at most one cycle? As the title says, I am trying to show that if $G$ is a finite simple graph with no even cycles, then each edge is in at most one cycle. I'm trying to do this by contradiction: let $e$ be an edge of $G$, and for contradiction suppose that $e$ was in two distinct cycles $C_1$ and $C_2$ of $G$. By assumption, $C_1$ and $C_2$ must have odd length. Now I would like to somehow patch $C_1$ and $C_2$ together to obtain a cycle of even length, but I'm not sure how to do so. If $C_1$ and $C_2$ only overlap in $e$, or perhaps in a single path containing $e$, then this can be done, but I can't see how to make this patching work when $C_1$ and $C_2$ overlap in disjoint paths. Any help is appreciated! AI: Here is the rough idea: Suppose $C_1$ and $C_2$ overlap in at least two disjoint paths. If we follow $C_1$ along the end of one path to the beginning of the next path, and then follow $C_2$ back to the end of the first path, we obtain a cycle $C_3$. Since this cycle must have odd length, the parity of the two parts must be different. This means I can change the parity of the length of $C_1$ by following the $C_2$ part of $C_3$ instead of the $C_1$ part. This is also a contradiction, as $C_1$ has odd length. Explicitly, let $a$ be the last vertex of one path contained in both $C_1$ and $C_2$, and let $b$ be the first vertex of the next path contained in both $C_1$ and $C_2$. Let $C_3$ be the cycle obtained by following $C_1$ from $a$ to $b$, and then following $C_2$ back to $a$. Since $C_3$ has odd length, the parity of the length along $C_1$ from $a$ to $b$ must be different from the parity of the length of $C_2$ from $a$ to $b$. It is this difference in parity that allows us to modify $C_1$. That is, let $C_1'$ be the cycle that agrees with $C_1$ except for the path from $a$ to $b$, where it agrees with $C_2$. Then $C_1'$ will have even length.
H: Prove $S\cap S ^\bot=\{0\}$ Let $S$ be a nonempty subset of the inner product space $V$ and $S^\bot$ be the orthogonal complement of $S$. Prove $S\cap S ^\bot=\{0\}$ AI: If $s\in S$ is also in $S^{\perp}$ then by definition $\langle s, r\rangle=0$ for all $r\in S.$ Then taking $r=s$ gives $s=0.$
H: when changing order of integration Let $f,g$ and $f\cdot g$ be functions with $\int_{-\infty}^\infty|h(x)|dx<\infty$ for $h=f,g,fg$. I am now given, in a proof, $$\int_{-\infty}^\infty\left(\int_{-\infty}^\infty f(u)g(w)\exp(-iwt)du\right)dw=\int_{-\infty}^\infty\left(\int_{-\infty}^\infty f(u)g(w)\exp(-iwt)dw\right)du$$ without any explanation of why this equation is right. So when is this right? In other words: When can you change the order of integration? What are the conditions? And why? Added: I know the theorem of Fubini for Lebesgue integrable functions, but above the functions are not given as elements of $\mathscr L^1$. AI: This is Fubini's Theorem. Both $f$ and $g$ are integrable functions and $e^{-iwt}$ has absolute value one (assuming $t$ is real). So $$ \int_{-\infty}^\infty\left(\int_{-\infty}^\infty |f(u)g(w)\exp(-iwt)|du\right)dw = ||f||_1 ||g||_1 $$ where $||\cdot||_1$ is the $L^1$-norm. First we can take out $g(w)$ since it is independent of $u$ and then take out $||f||_1$ since it is a constant: $$ \begin{split} &\int_{-\infty}^\infty\left(\int_{-\infty}^\infty |f(u)g(w)|du\right)dw = \int_{-\infty}^\infty |g(w)| \left(\int_{-\infty}^\infty |f(u)|du\right)dw \\ &= \int_{-\infty}^\infty |g(w)| ||f||_1 dw = ||f||_1 \int_{-\infty}^\infty |g(w)| dw = ||f||_1 ||g||_1 \end{split} $$ So $f(u)g(w)$ is thus integrable in $\mathbb{R} \times \mathbb{R}$ (but remember that this is different from $fg$ being integrable in $\mathbb{R}$, which was one of your assumptions). I have by the way used the version of Fubini where one checks the iterated integrals (it is in the Wikipedia page).
H: Linear Programming - values of x and y yielding maximum values for the following problem: I found the following linear programming question in a past exam paper while preparing for an upcoming exam: "The daily production of a sweet factory consists of at most 100 kg chocolate covered nuts and at most 125 kg chocolate covered raisins which are then sold in two different mixtures. Mixture A consists of equal amounts of nuts and raisins and is sold at a profit of 5.00 per kg Mixture B consists of one third nuts and two thirds raisins and is sold at a profit of 4.00 per kg Let there be x kg of mixture A and y kg of mixture B: 1.) Find values for x and y that will give a maximum profit. 2.) Calculate the maximum daily profit." The question seems fairly simple and I arrived at an answer of x = 84.375 and y = 173.4375 with a maximum profit of 1115.63. These values seem to fulfill the constraints but do not concur with the memorandum, which unfortunately only gives the final answers (x = 150; y = 75; max. profit 1050) and not the method by which to determine them. In the face of an exam that's rumored to be insanely challenging I'm losing my mind over this stupid thing. Someone help please! Yours desperately Zombie-eyed Student AI: Here are the two constraints: $$\frac{x}{2}+\frac{y}{3}\le100\\\frac{x}{2}+\frac{2y}{3}\le125$$ Notice that your point doesn't satisfy the second constraint. Here is the graph of the two constraints: $\hspace{2in}$ The critical region is below both lines, so we check the vertices of the critical region which are $(0,0),(200,0), (0,\frac{375}{2}),$ and $(150,75)$. You will find that $(150,75)$ maximizes the profit function $5x+4y$.
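If you want to confirm the vertex mechanically, here is a SciPy sketch (my addition; `linprog` minimizes, so the profit coefficients are negated):

```python
from scipy.optimize import linprog

# nuts:    x/2 + y/3  <= 100;  raisins: x/2 + 2y/3 <= 125
res = linprog(c=[-5, -4],                       # maximize 5x + 4y
              A_ub=[[0.5, 1/3], [0.5, 2/3]],
              b_ub=[100, 125],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                          # [150. 75.] 1050.0
```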
H: How does the divisibility graphs work? I came across this graphic method for checking divisibility by $7$. $\hskip1.5in$ Write down a number $n$. Start at the small white node at the bottom of the graph. For each digit $d$ in $n$, follow $d$ black arrows in a succession, and as you move from one digit to the next, follow $1$ white arrow. For example, if $n = 325$, follow $3$ black arrows, then $1$ white arrow, then $2$ black arrows, then $1$ white arrow, and finally $5$ black arrows. If you end up back at the white node, $n$ is divisible by $7$. This method indeed works, I was wondering why this work and how could we generate such graph for an arbitrary integer $k$? AI: Suppose you want to make a divisibility graph for $n$. Make $n$ dots and label them $0, 1, \ldots n-1$. You will have a "current number" which will change as you walk around the graph. You will be at dot $i$ if the remainder you get from dividing the "current number" by $n$ is $i$. As you walk around the graph, the current number will change, and the dot will change to track the remainder from dividing the new current number. Initially, the "current number" is $0$, so you start at dot $0$. Black arrows represent the operation of adding 1 to the current number. Make a black arrow from each dot $i$ to $i+1$, and also from dot $n-1$ to dot $0$. White arrows represent the operation of multiplying the current number by 10. Make a white arrow from each dot $i$ to $10\cdot i\bmod n$. That is, from $i$ to whatever is the remainder is when you divide $10\cdot i$ by $n$. For example, if $i$ is 2 and $n$ is 7, you make a white arrow from dot 2 to dot 6 because the remainder after dividing $2\cdot10$ by $7$ is 6. Any current number can be generated starting from 0 by adding 1 and multiplying by 10 in the right order. For example, to get to 213, you start at 0 and: $$ 0\stackrel{+1}\to1\stackrel{+1}\to2 \stackrel{\times10}\to20\stackrel{+1}\to21 \stackrel{\times10}\to210\stackrel{+1}\to211\stackrel{+1}\to212\stackrel{+1}\to213$$ Starting at dot 0 and following the arrows gives you, at each stage, the result of dividing the current number by $n$. The black arrows track the remainder if you add 1 to the current number, and the white arrows track what happens to the remainder if you multiply the current number by 10. As you traverse the graph, you are keeping track of what the remainder would be at each stage if you divided the current number by $n$. After you've finished building up the current number into the number you want to check, the result is divisible by $n$ only if the remainder would be 0—that is, only if you're back at dot $0$ where you started. Here's an example, the corresponding graph for $n=8$. I colored the ×10 arrows blue instead of white, because blue is pretty: I made two sets of these, for $2\le n\le 20$, which are viewable online; please enjoy them and do what you like with them.
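The walk is easy to phrase in code. Here is a short Python sketch (my addition) of the procedure: following $d$ black arrows from dot $s$ lands on $(s+d)\bmod n$, and a white arrow sends $s$ to $10s\bmod n$, so the whole walk is just Horner's rule mod $n$ (the leading white arrow from dot $0$ is a no-op):

```python
def divisible(num_str, n):
    dot = 0                              # start at dot 0 (the white node)
    for d in num_str:
        dot = (dot * 10 + int(d)) % n    # white arrow, then d black arrows
    return dot == 0                      # back at dot 0 <=> n divides the number

print(divisible("325", 7), divisible("343", 7))   # False True (343 = 7^3)
```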
H: Proving tautologies using semantic definitions I know how to prove tautologies with truth tables, but not with semantic definitions. How can I prove, using the semantic definitions, that the following are tautologies? \begin{gather} \bigl(\exists x\,P(x) \lor \exists x\,Q(x)\bigr) \to \exists x\,\bigl(P(x) \lor Q(x)\bigr) \tag{1} \\ \bigl(\forall x\,P(x) \land \forall x\,Q(x)\bigr) \to \forall x\,\bigl(P(x) \land Q(x)\bigr) \tag{2} \end{gather} AI: In some circles, the term tautology is used to mean just those sentences that are true based on their truth functional connectives. Sentences in first-order logic that must be true based on the semantics of the logic other than truth-functional connectives are often called the validities of the language. A propositional example In any case, we're interested in semantic arguments for the truth of sentences. In the propositional case, we might ask for a semantic justification of the formula $(P \land R) \to (P \lor Q)$. To demonstrate this with truth tables is straightforward, if somewhat tedious. To take the semantic approach, however, we look at the connectives in the sentence and what they mean. $(P \land R)\to (P \lor Q)$ is a conditional, and a conditional is true when the truth of the antecedent implies the truth of the consequent. The antecedent is $P \land R$, which is a conjunction. A conjunction is true if and only if both of its conjuncts are true. Its conjuncts are $P$ and $R$. The consequent is $P \lor Q$, which is a disjunction. A disjunction is true if and only if at least one of its disjuncts is true. Its disjuncts are $P$ and $Q$. So, does the truth of the antecedent imply the truth of the consequent? The truth of the antecedent requires that $P$ be true, which means that at least one of $P$ and $Q$ is true, so the consequent must also be true. Therefore, the conditional is true. A first order example This type of reasoning carries over to the first-order case, but we must handle a few more types of sentences. Particularly, a first-order interpretation fixes a non-empty domain of discourse and maps each constant symbol to an element of the domain, and each $n$-ary relation symbol to an $n$-ary relation on the domain. An interpretation $\cal I$ satisfies an expression $P(a)$ if and only if the set that the interpretation maps $P$ to contains the individual that the interpretation maps $a$ to, i.e., if $\cal I(a) \in \cal I(P)$. $\cal I$ satisfies a universal quantification $\forall x.\phi(x)$ if it satisfies $\phi(x)$ for every $x$ in the domain, and $\exists x.\phi(x)$ if it satisfies $\phi(x)$ for some $x$ in the domain. Let's consider the sentence $\forall x.(P(x) \lor \lnot P(x))$. This is a first-order validity. Why? Well, it is true if for each $x$ in the domain, $P(x) \lor \lnot P(x)$ is true. This is true if either $P(x)$ is true or $\lnot P(x)$ is true. $P(x)$ is true if $x$ is in $\cal I(P)$. $\lnot P(x)$ is true if $P(x)$ is false. $P(x)$ is false if $x \not\in \cal I(P)$. Since $x$ must be in $\cal I(P)$ or not in $\cal I(P)$, at least one of these cases must be true, and so the sentence is true. As another example, consider $\forall x.P(x) \to \exists x.P(x)$. This is true if the truth of $\forall x.P(x)$ implies the truth of $\exists x.P(x)$. Without going through all the low level details, $\forall x.P(x)$ is true if every $x$ in the domain is in $\cal I(P)$. Since the domain is non-empty, that means that there is at least one $x$ such that $x \in \cal I(P)$, and that is enough to make $\exists x.P(x)$ true. 
Therefore, the truth of the antecedent implies the truth of the consequent, so the conditional is true. Remarks These kinds of explanations can be rather tedious in some ways, especially because they sometimes seem so obvious. We may be tempted to say, “well of course $P \land Q$ being true implies that $P$ is true and $Q$ is true; that's what $\land$ means!” But that's the whole point—logic systems have two sides: a syntactic side and a semantic side, and we have to be able to see the connection between the two. We tend to be pretty good at this when the syntax has symbols the meanings of which we understand, but with more complicated languages, it is important to be able to do this rigorously. Other times, we might be interested in exploring different possible semantics for languages. For instance, in standard first-order logic, an interpretation is based on a non-empty domain of discourse. If we consider a different semantics that allows empty domains of discourse, then semantic analysis will show that $\forall x.(P(x) \lor \lnot P(x))$ is a validity under both kinds of semantics, but $\exists x.(P(x) \lor \lnot P(x))$ is a validity only in the semantics that requires a non-empty domain of discourse. This means that sentences like $\forall x.(P(x) \lor \lnot P(x)) \to \exists x.(P(x) \lor \lnot P(x))$ will be true under one semantics and not the other. This means that the same set of inference rules will be sound for one semantics, but unsound for the other. We can only discover this if we can reason about these sentences semantically.
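First-order validity cannot be established by a finite truth table, but as a sanity check (my addition) one can brute-force sentence $(1)$ over all interpretations of $P$ and $Q$ on small finite domains; this is only a smoke test, not a proof:

```python
from itertools import product

def holds_on_all_interpretations(size):
    dom = range(size)
    # every interpretation assigns P and Q a subset of the domain
    for P, Q in product(product((False, True), repeat=size), repeat=2):
        ante = any(P[x] for x in dom) or any(Q[x] for x in dom)
        cons = any(P[x] or Q[x] for x in dom)
        if ante and not cons:          # a counter-interpretation
            return False
    return True

print(all(holds_on_all_interpretations(n) for n in (1, 2, 3)))   # True
```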
H: A very strange deck - probability and expected number of draws Say we have a virtual deck of 70 cards of four suits and each player has access to his/her own unique independent deck (one players' actions do not affect another player's): $$ \begin{array}{r|rr} &\text{probability}& \text{quantity of card}\\ \hline \text{Heart}&0.5\%& 1\\ \text{Spade}&5\hphantom{.0}\%& 9\\ \text{Diamond}& 50\hphantom{.0}\% & 30 \\ \text{Clubs}& 44.5\%& 30 \end{array} $$ Here's where it gets interesting: the probability of the card does not change as cards are pulled out (without replacement!). For example if I pull a club, the probability to draw another club remains 44.5%...same if I pull 2 clubs, still 44.5%. This holds until I pull all the clubs, at which point, there are no more clubs in the virtual deck and the probability drops to 0. How would we calculate: Expected number of draws it would take to draw a heart % of players who will draw a heart on the 0-69th draw Probability of an individual user drawing the heart on their last (70th) draw AI: Let $X$ = # of draws to draw a heart. Let's consider $P(X)$. Note that $X$ could be $1,2,...,70$. Since draws are independent (provided that cards don't run out), we have: $$ \begin{array}{c|c} X & P(X) \\ \hline \\ 1 & 0.005 \\ 2 & (0.995)(0.005) \\ 3 & (0.995)^2(0.005) \\ 4 & (0.995)^3(0.005) \\ \vdots & \vdots \\ 69 & (0.995)^{68}(0.005) \\ 70 & ? \end{array} $$ Note that the % of players who will draw a heart on the ($1$ to $69$)th draw is: $$ P(X<70)=\sum_{x=1}^{69} 0.005(0.995)^{x-1} = \dfrac{0.005(1-0.995^{69})}{1-0.995} = 1-0.995^{69}\approx 0.2924 $$ Thus, the probability of an individual user drawing the heart on their last ($70$th) draw is: $$ P(X=70)=1-P(X<70) \approx 1 - 0.2924 = 0.7076 $$ Hence, the expected number of draws is: $$ E(X) = \sum_{x=1}^{70} xP(x) = 70(0.7076) + \sum_{x=1}^{69} x[0.005(0.995)^{x-1}] \approx 59.2 $$
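The closed forms above are easy to confirm numerically; here is a short Python sketch (my addition):

```python
p, n = 0.005, 70
probs = [p*(1 - p)**(x - 1) for x in range(1, n)]   # X = 1, ..., 69
probs.append((1 - p)**(n - 1))                      # heart forced on draw 70
print(sum(probs))                                   # 1.0 (sanity check)
print(probs[-1])                                    # ~ 0.7076
print(sum(x*q for x, q in zip(range(1, n + 1), probs)))   # E(X) ~ 59.2
```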
H: Can you use combinatorics rather than a tree for a best of five match? The Chicago Cubs are playing a best-of-five-game series (the first team to win 3 games wins the series and no other games are played) against the St. Louis Cardinals. Let X denote the total number of games played in the series. Assume that the Cubs win 59% of their games versus their arch rival Cardinals and that the probability of winning a game is independent of other games. (a) calculate the mean and standard deviation for X. In this problem I think I understand how to calculate the mean and standard deviation, but first you have to calculate the total combinations of plays. Is there a way using combinations or permutations to derive all possibilities rather than drawing a tree? The tricky thing with using combinatorics is that it's a best of five. So I don't see how to use permutations directly because as soon as you have 3 wins the series is over--meaning you'll over count if you don't take this into consideration. I'd appreciate any feedback. Thank you! k=3 $$ \begin{array}{c|c|c} \text{Loss} & \text{Win} & \text{Combinations}\\ \hline 3 & 0 & 1\\ 2 & 1 & 3\\ 1 & 2 & 3\\ 0 & 3 & 1\\ \end{array} $$ Total Combinations = 8 k=4 $$ \begin{array}{c|c|c} \text{Loss} & \text{Win} & \text{Combinations}\\ \hline 1 & 3 & 4\\ 2 & 2 & 6\\ 3 & 1 & 4\\ 4 & 0 & 1\\ \end{array} $$ **Here have to subtract 1 from total combinations because cannot have WWWL. Total combinations = 15-1 = 14 k=5 $$ \begin{array}{c|c|c} \text{Loss} & \text{Win} & \text{Combinations}\\ \hline 2 & 3 & 10\\ 3 & 2 & 10\\ 4 & 1 & 5\\ 5 & 0 & 1\\ \end{array} $$ **Here have to subtract from total combinations because cannot have WWWLL and LWWWL. Total combinations = 26-2 = 24 AI: Let us solve a more general problem. Imagine we have a series between two teams, A and B. Team A wins a game with probability $p$, and we make the independence assumption of the OP. Let us suppose that the first team to win $w$ games wins the series. Let $X$ be the number of games. Then $w\le X\le 2w-1$. Yours is the case $w=3$. For any $k$ in this interval, we find the probability that $X=k$. In order for the series to have length exactly $k$, one of the teams has to have won $w-1$ of the first $k-1$ games, and has to win the last game. The probability that A wins the series in $k$ games is therefore $$\binom{k-1}{w-1}p^{w-1}(1-p)^{k-w}p=\binom{k-1}{w-1}p^w(1-p)^{k-w},$$ and the probability that B wins the series in $k$ games is $$\binom{k-1}{w-1}(1-p)^wp^{k-w}.$$ The events A wins the series in $k$ games and B wins the series in $k$ games are disjoint, so to find $\Pr(X=k)$ we add the two expressions above. Remark: A distribution closely related to the above calculations is the negative binomial. For details, Wikipedia has a discussion, as do many introductory probability books. For the case $w=3$, we don't need all this machinery, and can simply draw a tree. Let $p=0.59$. We have $\Pr(X=3)=p^3+(1-p)^3$. For $k=4$, we get $3p^3(1-p)+3(1-p)^3p$. For $k=5$ we get $6p^3(1-p)^2+6(1-p)^3p^2$. All these expressions can be simplified somewhat.
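For a numerical check of the general formula with $w=3$, $p=0.59$ (my addition): it gives mean $\approx 4.08$ games and standard deviation $\approx 0.79$.

```python
from math import comb

p, w = 0.59, 3
dist = {k: comb(k - 1, w - 1) * (p**w * (1 - p)**(k - w) + (1 - p)**w * p**(k - w))
        for k in range(w, 2*w)}            # series lengths 3, 4, 5
mean = sum(k*q for k, q in dist.items())
var = sum(k*k*q for k, q in dist.items()) - mean**2
print(dist, mean, var**0.5)                # probabilities sum to 1
```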
H: Irreducibility of locally closed set We use the Zariski topology. Let $\phi:\mathbb C^n\rightarrow \mathbb C^m$ be a polynomial map and let $S\subset \mathbb C^n$ be a locally closed set (that is, $S$ is the intersection of an open set and a closed set). Assume that $\phi$ induces an isomorphism of $S$ onto a nonempty open subset. ${\bf Question:}$ Is it true that $S$ is irreducible, that is, if $S=(C_1\cap S)\cup(C_2\cap S)$ with closed $C_1$ and $C_2$, then $C_1\cap S=S$ or $C_2\cap S=S$. ${\bf Solution (attempt):}$ By definition, a constructible set is a finite union of locally closed sets. Chevalley’s theorem states that the image of a constructible set under a polynomial map is constructible. Thus, if we assume that $S=(C_1\cap S)\cup(C_2\cap S)$, then both $\phi(C_1\cap S)$ and $\phi(C_2\cap S)$ are constructible. Since the open subset $\phi(S)$ is never a finite union of closed subsets, it follows that at least one of the sets $\phi(C_1\cap S)$, $\phi(C_2\cap S)$ contains an open subset. Then I should probably use the fact that $\phi$ induces an isomorphism, but I do not know how to do this. AI: Every non-empty open subset of $\mathbb C^m$ is irreducible. Since $\phi$ restricts to a homeomorphism of $S$ onto such a subset, $S$ is irreducible too.
H: Rectify the formula Trying to rectify this formula. I'm having problems eliminating the correct symbols. $$F = ∀x[∀y(P(x,y)∪Q(x,z)) ∩∃z∃x∀y(¬P(x,y) ∪ ∀z¬Q(x,z))]$$ AI: $$F = ∀x[∀y(P(x,y)∪Q(x,z))∩∃z∃x∀y(¬P(x,y)∪∀z¬Q(x,z))]$$ Notice that $∃z$ is not needed: its variable is immediately rebound by the inner $∀z$, so the quantifier is vacuous. Renaming the remaining bound variables apart gives $$F = ∀x[∀y(P(x,y)∪Q(x,z))∩∃x_1∀y_1(¬P(x_1,y_1) ∪ ∀z_1¬Q(x_1,z_1))]$$
H: How to show that $x^2 \equiv 23 \equiv 5 \pmod 9$ is not solvable? How to show that $x^2 \equiv 23 \equiv 5 \pmod 9$ (2) is not solvable? I got that a solution of $x^2\equiv 23\equiv 2 \pmod 3$ (1) would be $x \equiv \pm 2^{\frac{3+1}{4}}\equiv\pm2 \pmod 3$, because $3\equiv 3 \pmod 4$ (3). But by Euler's criterion $(\frac{2}{3})\equiv 2^{\frac{3-1}{2}}\equiv 2\equiv -1 \pmod 3$, that is, $(\frac{2}{3})=-1$, so (1) has no solution; and because $x^2 \equiv 23 \equiv 5 \pmod 9$ is solvable only if $x^2 \equiv 23 \equiv 2 \pmod 3$ (5) is, (2) has no solution either. Is this correct? I do not know where in Wikipedia you get this result (3) (it is a lemma in my lecture notes). And you can't decide by using the Jacobi symbol: $(\frac{23}{9}) =(\frac{23}{3})^2 =1$, so computing the symbol does not tell you whether (2) is solvable or not. Can you give some elegant or acceptable solution? And by the way, I'm not sure whether (5) is a correct analysis and where in Wikipedia it can be found (again I find this principle in my lecture notes as a lemma). AI: Hint: if it had a solution mod $\,9\,$ then that would yield a solution mod $\,3,\,$ which does not exist, i.e. $\ 9\mid x^2-23\,\Rightarrow\,3\mid x^2-2,\,$ but $\,{\rm mod}\ 3\!: x^2\in \{0,\pm1\}^2 \equiv \{0,1\} \not\ni 2.$
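A one-line check of the hint's key fact (my addition): listing the squares modulo $9$ directly also settles the original question.

```python
print(sorted({x*x % 9 for x in range(9)}))  # [0, 1, 4, 7]: 5 is absent, so x^2 = 5 (mod 9) has no solution
```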
H: Closed form of $\sum_{j=0}^k {k \choose j} (-1)^{k-j} j^b$ where $k,b$ are positive integers. I am wondering about the closed form (if it exists) of $$\sum_{j=0}^k {k \choose j} (-1)^{k-j} j^b$$ where $k,b$ are positive integers. I know it is equal to $0$ when $b<k$ and the sum is equal to $k!$ when $b=k$. However, I am at a complete loss as to what it equals when $b>k$. Any help is very much appreciated. Thanks in advance! AI: Here is a closed form $$ \sum_{j=0}^{k}(-1)^{k-j}{k \choose j} j^b = k! \left\{\begin{matrix} b \\ k \end{matrix}\right\}, $$ where $ \left\{\begin{matrix} b \\ k \end{matrix}\right\} $ are Stirling numbers of the second kind.
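A brute-force confirmation of the identity over a small range (my sketch; the recurrence-based Stirling implementation is mine):

```python
from math import comb, factorial

def S2(b, k):
    # Stirling numbers of the second kind via S(b,k) = k*S(b-1,k) + S(b-1,k-1)
    if k == b:
        return 1
    if k == 0 or k > b:
        return 0
    return k*S2(b - 1, k) + S2(b - 1, k - 1)

for k in range(1, 6):
    for b in range(1, 9):
        lhs = sum((-1)**(k - j) * comb(k, j) * j**b for j in range(k + 1))
        assert lhs == factorial(k) * S2(b, k)
print("identity verified for 1 <= k <= 5, 1 <= b <= 8")
```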
H: Hilbert's 17th Problem - Artin's proof In this expository article, it is mentioned that Emil Artin proved Hilbert's 17th problem in his paper: E. Artin, Uber die Zerlegung definiter Funktionen in Quadrate, Abh. math. Sem. Hamburg 5(1927), 110–115. Not being able to speak German, my question is Does anyone know if English translation of this paper exists somewhere? Or perhaps some link to a book (or article, blog post, etc.) where this proof is given in English? My google searches have been in vain. Note that I am only interested in Artin's proof. (There is a algorithmic proof due to Dellzel, which is in English). Thanks! P.S. I have previously asked for English translation of another paper by E. Artin. AI: I recommend this: The Honors Class: Hilbert's Problems and Their Solvers [Paperback] Ben Yandell It's available on Amazon.
H: Given basis spanning the vector space I am learning Linear algebra nowadays. I had a small doubt and I know it's an easy one. But still I am not able to get it. Recently I came across a statement saying "((1,2),(3,5)) is a basis of $ F^2 $ ". According to the statement a linear combination of the vectors in the list,i.e., $a(1,2)+b(3,5)$ (where a and b belong to F of course) must span the vector space $F^2$. I wanted to know how we can get all the vectors in the vector space using linear combination of these two given vectors here (if the two vectors were ((1,0),(0,1)) which is the standard basis, it would have been fine). But how can we get all the points in the vector space $F^2$ using ((1,2),(3,5))??? suppose I want to get the vector (1,1). I can't think of getting it using linear combination of the given two vectors. Kindly help me with this. I know its an easy one but still could not help and had to ask you guys. Thanks. AI: A basis does not need to have orthogonal vectors in order to allow other vectors in the space to be expressed as linear combinations (they just can't be linearly dependent). To construct a vector in your $ F^2 $ , you would need to solve the equation $ \ \langle x , y \rangle \ = \ a \langle 1 , 2 \rangle + b \langle 3 , 5 \rangle \ , $ which, for your example, is equivalent to solving the set of linear equations $$ a \cdot 1 \ + \ b \cdot 3 \ = \ 1 \ \ \text{and} \ \ a \cdot 2 \ + \ b \cdot 5 \ = \ 1 \ . $$ This gives us $ \ a = -2 , b = 1 , $ and thus $ \ \langle 1,1 \rangle \ = \ (-2) \cdot \langle 1,2 \rangle \ + \ \langle 3,5 \rangle \ . $
H: Why is the operator $2$-norm of a diagonal matrix its largest value? I read this in my textbook and have tried working through it - I keep getting max 2-norm(Ax), which is just the magnitude of Ax. How should I do this proof? (note, this is not for homework, I'm just trying to understand why as no proof is provided). AI: From your question, I assume you mean the operator norm with respect to the 2-norm. Let $A = {\rm diag}(\lambda_1, \ldots, \lambda_n)$ be a diagonal matrix. We have \begin{align*} \|A\| &= \max_{\|x\|_2 = 1} \|Ax\|_2\\ &= \max_{\|x\|_2 = 1} \left(\sum_{i=1}^n \lambda_i^2x_i^2\right)^{1/2}\\ &\le \max_{\|x\|_2 = 1} \max_i|\lambda_i| \left(\sum_{i=1}^n x_i^2\right)^{1/2}\tag 1\\ &= \max_i|\lambda_i| \cdot \max_{\|x\|_2 = 1} \|x\|_2\\ &= \max_i|\lambda_i| \end{align*} For (1), we can argue as follows: For each $i$: we have $x_i^2 \ge 0$, hence multiplying the inequality $|\lambda_i|^2 \le \max_i |\lambda_i|^2$ by $x_i^2$, we get $x_i^2|\lambda_i|^2 \le (\max_i|\lambda_i|)^2 x_i^2$. Now sum and take the square root of both sides. On the other hand, let $x$ be an eigenvector corresponding to an eigenvalue of largest absolute value. Then $$ \|Ax\|_2 = \max_i|\lambda_i| \cdot \|x\|_2 $$ This gives $\|A\| = \max_i |\lambda_i|$.
H: Why is a set a subspace only when its determinant is not zero? I am reading in my book that a given set $\{(1,2,-1),(0,3,4),(0,2,1)\}$ is a subspace of $R^n$ when the determinant of the corresponding table that the set creates does not equal zero, like this: \begin{array}{c c c} 1& 2 & -1 \\ 0& 3 & 4 \\ 0& 2 & 1 \\ \end{array} But why is that? What is the intuition behind that? Doesn't $R^n$ include like everything? Thank you. AI: The set that you give is not a subspace of $\mathbb{R}^3$. It just consists of three single points (or vectors, if you prefer). However, the linear span of your set, i.e. the set of vectors that can be written as $$ c_1 (1,2,-1) + c_2 (0,3,4) + c_3 (0,2,1) $$ is a subspace (here $c_1$, $c_2$ and $c_3$ are arbitrary real numbers), and this holds regardless of which vectors you take. The determinant can be used as a quick test to see if the three vectors are linearly independent or not. If the determinant is non-zero, the vectors are linearly independent, and they form a basis for the subspace they span. In this case, the subspace is the whole of $\mathbb{R}^3$. If the determinant is zero, the vectors are linearly dependent, and they span a proper subspace of $\mathbb{R}^3$.
H: How To Set The Area Between Two Functions Equal To A Constant? Please pardon the broad nature of this question. Suppose two functions encompass an area between them. What approach might be taken to adjust either function through adding constants to set the area between them equal to a constant? Part of the problem I have encountered is that every time one graph is moved the intercepts change, which makes the process rather time consuming. Is there a way to do this without writing a computer program? AI: It sounds like you have $f(x)$ and $g(x)$, each of which ranges from $x=a$ to $x=b$ and the area between them is what you are talking about. If $f(x) \gt g(x)$, then the area is $\int_a^b (f(x)-g(x)) dx$. If you add a constant $c$ to $g$, the area will decrease by $c(b-a)$. Is this what you are thinking? Added: if you are integrating between the intersection points of $f$ and $g$ you are correct that the area between them will be influenced by changes in the intersection point. If the intersection points are $a$ and $b$, adding a constant $c$ to $g$ will subtract $c(b-a)$ from the integral from $a$ to $b$. Now you need to deal with the shift of the intersection points. Without knowing more about $f$ and $g$ there isn't much to say. If $f$ and $g$ are differentiable, adding a small constant $\epsilon$ to $f$ will shift the lower intersection point to the left by $\frac {\epsilon}{f'(a)-g'(a)}$ so the added area will be about $\frac 12 \frac {\epsilon}{f'(a)-g'(a)}\epsilon$. Similarly at the right end, it will shift the intersection point left by $\frac {\epsilon}{f'(b)-g'(b)}$ but you expect the denominator to be negative here. The added area is similar, $\frac {\epsilon^2}{2(g'(b)-f'(b))}$. The total increase by raising $f$ is then $\epsilon (b-a)+\frac {\epsilon^2}{2(f'(a)-g'(a))}+\frac {\epsilon^2}{2(g'(b)-f'(b))}$. If the derivatives are (almost) constant at $a,b$ you can solve the quadratic to find (an approximate) solution. Added again: If you can integrate $f$ and $g$ analytically, you can turn the problem into an algebraic one. Taking the cosine example, we have $\int_{-a}^a \cos x \ dx = 2 \sin a$. To get an area $A$ under one hump of $\cos x - b$, we note that the intersection is $\pm \arccos b$, so we have $\int_{-\arccos b}^{\arccos b} (\cos x - b) dx=2 \sin (\arccos b)-2b\arccos b=2\sqrt {1-b^2}-2b\arccos b$ You need to solve $A=2\sqrt {1-b^2}-2b\arccos b$ for $b$. You won't be able to do this algebraically, but a one dimensional root finder will make quick work of it.
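For the cosine example, the one-dimensional root find takes only a few lines; here is a SciPy sketch (my addition, with an arbitrary target area $A=1$):

```python
import numpy as np
from scipy.optimize import brentq

A = 1.0                                              # desired area (assumed)
area = lambda b: 2*np.sqrt(1 - b*b) - 2*b*np.arccos(b)
b = brentq(lambda t: area(t) - A, 0.0, 1.0 - 1e-12)  # area(0) = 2, area(1) = 0
print(b, area(b))                                    # the shift b with area(b) == A
```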
H: Does the functor $S:\mathbf{Top}\to \mathbf{sSets}$ preserve (homotopy)colimits? Does the functor $S:\mathbf{Top}\to \mathbf{sSets}$ given by $S(X)_m=Hom_{\mathbf{Top}}(\Delta^m,X)$ preserve colimits? If not, what is a counterexample? The only things I can say are that it preserves limits because it is a right adjoint and that it preserves filtered colimits since each $\Delta^m$ is a finite CW-complex. Does the functor $S$ preserve homotopy colimits? If not, what is a counterexample? Again I can only say that it preserves homotopy limits because it is a right Quillen adjoint. This Quillen adjunction with left adjoint the realization functor however is also a Quillen equivalence. I wonder if it follows already from this fact, that the right adjoint preserves homotopy colimits (and that the left adjoint preserves homotopy limits). AI: I find the claim about filtered colimits suspect – that question is very subtle (which is why the small object argument in $\mathbf{Top}$ has to be done carefully). Anyway, $S$ preserves coproducts, because the spaces $\Delta^n$ are all connected. Thus, $S$ preserves colimits of diagrams of discrete topological spaces. (In fact, $S$ restricts to an equivalence between discrete topological spaces and discrete simplicial sets.) However, consider the Sierpiński space $X$, i.e. the space with one open point and one closed point. Take the coequaliser of the two maps $1 \to X$; it is $1$ again, of course. Now, $S(X)_n$ is the set of open subsets of $\Delta^n$, and the coequaliser of the two maps $S(1)_n \to S(X)_n$ simply identifies the element corresponding to $\emptyset$ with the element corresponding to $\Delta^n$; this, of course, is still an infinite set. So we have the required example of a coequaliser not preserved by $S$. Note, $X$ is compactly generated, so not completely pathological! But if you prefer to work in the category of compactly-generated Hausdorff spaces (say), here is another counterexample. Consider two maps $1 \to \Delta^1 \amalg \Delta^1$ whose coequaliser is $\Delta^1$. Suppose, for a contradiction, that $\mathbf{Top}(\Delta^1, -)$ preserves this coequaliser. Then the induced map $\mathbf{Top}(\Delta^1, \Delta^1 \amalg \Delta^1) \to \mathbf{Top}(\Delta^1, \Delta^1)$ must be surjective, but that is false: since $\Delta^1$ is connected, its image in $\Delta^1 \amalg \Delta^1$ must be entirely contained in one of the two copies, so there is no map $\Delta^1 \to \Delta^1 \amalg \Delta^1$ whose composite with the coequaliser map $\Delta^1 \amalg \Delta^1 \to \Delta^1$ is the identity. As for homotopy colimits: since $S$ is a homotopical equivalence (i.e. preserves weak equivalences and induces an equivalence of homotopy categories), it indeed preserves all homotopy colimits. Here I mean homotopy colimits in the sense of left derived functors.
H: Simple question about circumference of circle Q: The physical education teacher asked a classroom to choose, by vote, a sport among volleyball, basketball and football to practice in class the following week. pie chart: The segment AB, which is the diameter of the pie chart, measures 12 cm, and the length of the smaller arc $\stackrel \frown{AC}$ is $\frac{3\pi}{2}$ cm. Updated/ADDED: Knowing that the sport basketball received 4 votes, answer: a) which was the sport with the most votes? how many students chose this sport? b) how many students participated in the voting? c) How many students voted for football? The most important thing for me is to understand how to solve this problem and any other question similar to this. AI: To answer some of your questions, we need to know how many students voted. However, you have all the information to determine what percentage of the students who voted, voted for each sport. A circle with diameter $12$ cm has a circumference of $C = 12 \pi$. $\;\stackrel \frown{AB} = 6 \pi$, or $\dfrac 12\;$ the circumference of the circle. What fraction of $12\pi$ is $\stackrel \frown{AC} = \frac{3\pi}{2}$? $$\dfrac{3\pi/2}{12\pi} = \frac{3}{24} = \frac 18$$ What must the length of $\stackrel \frown{BC}$ be, given that $6 \pi + 3\pi/2 + \stackrel \frown{BC} = 12\pi$? We must have that the length of $\stackrel\frown{BC} = 9\pi/2$ cm. And $\dfrac{9 \pi/2}{12 \pi} = \dfrac 9{24} = \dfrac 38$ of the circumference of the entire circle. We can't know, given the information provided, how many students voted. At any rate, of those that did vote, you can determine the percentage or fraction of that total that voted for each sport, respectively. Now, you have that $\frac 12$ or $50\%$ of all students voted to play volleyball, $\frac 18$ or $12.5\%$ of all students voted to play basketball, and $\frac 38$ or $37.5\%$ of the students voted to play football. Edit/Update: Okay, so now we know that basketball received $4$ votes. This is sufficient to compute the number of students who voted, and the number of students who voted for any given sport. So we'll let $X$ be the number of students voting. Then $\dfrac 1 8\times X = 4 \implies X = 32$ students voted. \begin{align} \hline \\ \text{Students voting for basketball}: \dfrac 18 \times 32 & = \;\;4 \\ \text{Students voting for football}: \dfrac 38 \times 32 & = 12 \\ \text{Students voting for volleyball}: \dfrac 12 \times 32 &= 16 \\ \hline \\ \text{Total number of students voting}:\quad\qquad\; & \phantom{ = } 32 \\ \end{align}
H: Derivative of absolute value over the complex numbers Given the function $f: \mathbb{C} \rightarrow \mathbb{C}, z \mapsto z\bar{z}$. I am supposed to determine where f is differentiable and where it is holomorphic. So I tested the Cauchy-Riemann differential equations and found out that for $u(x,y)=x^2+y^2,v(x,y)=0$ we have $ D_xu(x,y)=2x=D_yv(x,y) \Leftrightarrow x=0$ and $ D_yu(x,y)=2y=-D_xv(x,y) \Leftrightarrow y=0$. So $(0,0)$ is the only candidate where it might be differentiable, and since the partial derivatives entering the CR differential equations are continuous, f is actually differentiable there. Is this correct? So f appears to be nowhere holomorphic. AI: You are correct. The function is (complex-)differentiable only at $z=0$ and nowhere holomorphic. You can check the differentiability at $z=0$ directly by computing $$\lim_{h\to 0} \frac{f(h)-f(0)}{h} = \lim_{h\to 0} \frac{h\bar h}{h} = \lim_{h\to 0} \bar h = 0.$$ (Note that Cauchy-Riemann's equations can be satisfied at a single point, even if the function is not differentiable there, but since $u$ and $v$ are $C^1$, this is not a problem here.)
H: Can you derive the current version of quadratic reciprocity from Gauss's version? Can you derive the current version (after: "He notes that these can be combined") of quadratic reciprocity from Gauss's version? I have tried it but I have trouble understanding, especially the step $ q \equiv 3 \pmod 4 \Rightarrow \big( (\frac{p}{q}) =1 \Leftrightarrow (\frac{-p}{q}) =1 \big)$. I understand that if $ q \equiv 3 \pmod 4$ (2), then by using Euler's criterion you get $(\frac{p}{q}) =1$, but I don't know how you get $(\frac{q}{p}) =1$ (1). But is (1) the goal if you assume (2)? AI: Gauss' version: (1) Suppose $q\equiv 1\pmod{4}$. Then $x^2\equiv p\pmod{q}$ is solvable if and only if $x^2\equiv q\pmod{p}$ is solvable. Using Legendre symbols, this means exactly $\left(\frac{p}{q}\right)=\left(\frac{q}{p}\right)$. (2) Suppose $q\equiv 3\pmod{4}$. Then $x^2\equiv p\pmod{q}$ is solvable if and only if $x^2\equiv -q\pmod{p}$ is solvable. Using Legendre symbols, this means exactly $\left(\frac{p}{q}\right)=\left(\frac{-q}{p}\right)$. We may write $\left(\frac{-q}{p}\right)=\left(\frac{-1}{p}\right)\left(\frac{q}{p}\right)$. But by the first supplement to QR, $\left(\frac{-1}{p}\right)=(-1)^{\frac{p-1}{2}}$. Hence, if $p\equiv 1\pmod{4}$, then $\left(\frac{p}{q}\right)=\left(\frac{q}{p}\right)$ or $\left(\frac{p}{q}\right)\left(\frac{q}{p}\right)=1=(-1)^{\frac{(p-1)(q-1)}{4}}$. If instead $p\equiv 3\pmod{4}$, then $\left(\frac{p}{q}\right)=-\left(\frac{q}{p}\right)$ or $\left(\frac{p}{q}\right)\left(\frac{q}{p}\right)=-1=(-1)^{\frac{(p-1)(q-1)}{4}}$.
H: Let $A$ and $B$ be $n\times n$ matrices. Suppose $A$ is similar to $B$. Prove trace($A$) = trace($B$). Let $A$ and $B$ be $n\times n$ matrices. Suppose $A$ is similar to $B$. Prove $\operatorname{trace}(A) = \operatorname{trace}(B)$. I'm not sure where to go on this. So far I have this: If $A$ is similar to $B$, then $B=P^{-1}AP$ and $A=PBP^{-1}$ This implies that: $\operatorname{trace}(B) = \operatorname{trace}(P^{-1}AP)$ and $\operatorname{trace}(A) = \operatorname{trace}(PBP^{-1})$ Not sure where to go from here AI: Because $A$ and $B$ are similar, we know that $B=P^{-1}AP$. Thus $$tr(B)=tr(P^{-1}AP)=tr(P^{-1}(AP))=tr((AP)P^{-1})=tr(A(PP^{-1}))=tr(AI)=tr(A)$$
H: Partial Sums of partial sums of a converging series. I've been studying for an analysis qual and this problem was on one of the past exams. Let $\{a_{j}\}$ $\subseteq$ $\mathbb{R}$ such that $\sum_{j=1}^{\infty}a_{j}=\frac{3\pi}{4}$. For every $n\in \mathbb{N}$, define $T_{n}=\frac{1}{n}\sum_{j=1}^{n}S_{j}$ where for each $j \in \mathbb{N}$, $S_{j}=\sum_{k=1}^{j} a_{k}$. Does $\{ T_{n} \}_{n=1}^{\infty}$ converge? If so, find its sum. I have been able to show that |$T_{n}$| is bounded by $\frac{3\pi}{4}$, but I have a sneaking suspicion that it does indeed converge to $\frac{3\pi}{4}$ but have been unable to show it. Can I get some help with that? thanks. AI: For large enough $j$, $S_j$ is within $\epsilon$ of $\frac{3\pi}{4}$. Make this rigorous by quantifying what "large enough" means. Then $T_n$ is an average of a "handful" of terms that are not necessarily close to $\frac{3\pi}{4}$ and an unbounded finite number of terms that are within $\epsilon$ of $\frac{3\pi}{4}$. Again, make this rigorous (especially for an analysis qual!), use inequalities, group like terms together and you should have something like $$\left|T_n-\frac{\overbrace{\left(\frac{3\pi}{4}+\cdots+\frac{3\pi}{4}\right)}^{n\text{ copies}}}{n}\right|\leq\frac{\text{bounded sum}+(n-n_{\epsilon})\epsilon}{n}\to\epsilon$$
H: Show that $N_n \mid N_m$ if and only if $n \mid m$ Let $N_n$ be an integer formed of $n$ consecutive $1$s. For example $N_3 = 111,$ $N_7 = 1 111 111.$ Show that $N_n \mid N_m$ if and only if $n \mid m.$ AI: Note that $$N_n=\frac{10^n-1}9$$ Bill Dubuque once proved the following: Let $\{f_n\}$ be a sequence of integers with $f_0=0$ such that, when $n>m$, we have $$f_n\equiv f_{n-m}\mod f_m$$ Then $(f_n,f_m)=f_{(n,m)}$ Apply this to any sequence of the form $$\hat a_n=\frac{a^n-1}{a-1}$$ to prove the claim $\hat a_n\mid \hat a_m\iff n\mid m$. In your case $a=10$.
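A small brute-force check of the claim (my addition):

```python
def N(n):
    return (10**n - 1) // 9      # the repunit 11...1 with n ones

for n in range(1, 8):
    for m in range(1, 40):
        assert (N(m) % N(n) == 0) == (m % n == 0)
print("N_n | N_m  <=>  n | m, checked for n < 8, m < 40")
```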
H: Let A and B be n by n matrices. Suppose A is invertible. Show that AB is similar to BA. Let $A$ and $B$ be $n\times n$ matrices. Suppose $A$ is invertible. Show that $AB$ is similar to $BA$. I started with $BA = P^{-1}(AB)P$, but I'm drawing a blank now. AI: Take $P=A$. Since $A$ is invertible, this choice makes sense, and $$P^{-1}(AB)P = A^{-1}(AB)A = (A^{-1}A)(BA) = BA,$$ so $AB$ is similar to $BA$. (Note that we may not assume $AB=BA$; conjugating by $P=A$ is exactly what moves the factor $A$ from one side of $B$ to the other.)
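A quick numerical spot check with random matrices (my addition):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # almost surely invertible
B = rng.standard_normal((4, 4))
# P = A conjugates AB to BA
print(np.allclose(np.linalg.inv(A) @ (A @ B) @ A, B @ A))   # True
```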
H: Laplace-Beltrami on a sphere I'm trying to compute the Laplace-Beltrami of the function $u(r,\varphi,\theta) = 12\sin(3\varphi)\sin^3(\theta)$ on a unit sphere. Note that $\varphi$ is the azimuth, i.e. $\varphi \in [0,2\pi]$ and $\theta$ the inclination, i.e. $\theta \in [-\frac{\pi}{2},\frac{\pi}{2}]$. For instructive purposes, I'd like to do this step by step. The Laplace-Beltrami of $u$ is defined as $$\Delta u := \mathrm{div} (\mathrm{grad} \; u).$$ Since we're talking about a surface (the sphere), I assume that we should use the surface gradient of $u$, defined as $$\nabla_S u := \nabla u - \vec{n}(\vec{n} \cdot \nabla u).$$ The gradient operator in spherical coordinates is defined as $$\nabla := \frac{\partial }{\partial r} \vec{e_r} + \frac{1}{r} \frac{\partial }{\partial \theta} \vec{e_\theta} + \frac{1}{r\;\sin(\theta)} \frac{\partial }{\partial \varphi} \vec{e_\varphi},$$ which results in $$\nabla u = 0 \vec{e_r} + \frac{1}{r} 36 \sin(3\varphi) \sin^2(\theta) \cos(\theta) \vec{e_\theta} + \frac{1}{r} 36 \cos(3\varphi) \sin^2(\theta) \vec{e_\varphi}.$$ Now, I'm not quite sure about the unit normal $\vec{n}$ on the sphere. I thought it would just be $\vec{e_r}$, but that cannot be right since in that case the inner product $\vec{n} \cdot \nabla u$ is zero (and hence the surface gradient would be equal to the regular gradient). Just to be sure, the inner product for a spherical coordinate setting is defined as $a \cdot b = g_{ij} a^i b^j$ — using Einstein notation — with the metric $g$ defined as $$\left(\begin{array}{ccc}1 & 0 & 0 \\ 0 & r^2 & 0\\ 0 & 0 & r^2 \sin^2(\theta) \end{array}\right),$$ correct? Since the spherical coordinate system is right-handed, taking the cross product of the tangent vectors $\vec{e_\varphi}$ and $\vec{e_\theta}$ again results in $\vec{e_r}$. Could someone point out where I'm going wrong? Next up is the divergence. I assume there is something like the surface divergence, but I couldn't find much about it (any references are most welcome). This would result in $\Delta_S u = \mathrm{div}_S (\nabla_S u)$. It would be great if somebody could help to complete this elaboration. The eventual result of $\Delta_S u$ should be $-12 u$. [Edit] Using the regular divergence operator for a spherical coordinate setting, defined as $$\nabla \cdot := \frac{1}{r^2} \frac{\partial}{\partial r} r^2 \vec{e_r} + \frac{1}{r \sin(\theta)} \frac{\partial}{\partial \theta} \sin(\theta) \vec{e_\theta} + \frac{1}{r \sin(\theta)} \frac{\partial}{\partial \varphi} \vec{e_\varphi},$$ we get (without using the metric $g$ as defined above) $$\frac{1}{r^2 \sin(\theta)} \left( 36 \sin(3\varphi) \left\{ 3 \sin^2(\theta) \cos^2(\theta) - \sin^4(\theta) \right\} \right) + \frac{1}{r^2 \sin(\theta)} \left( -108 \sin(3\varphi) \sin^2(\theta) \right).$$ In case the metric should be used (I'm not sure about this), the result is $$\left( 36 \sin(3\varphi) \left\{ 3 \sin(\theta) \cos^2(\theta) - \sin^3(\theta) \right\} \right) + \left( -108 \sin(3\varphi) \sin^3(\theta) \right).$$ Since the solution should be $-12u = -144 \sin(3\varphi) \sin^3(\theta)$, I'm not sure how I should get rid of the term $108 \sin(3 \varphi) \sin(\theta) \cos^2(\theta)$. Anyone? AI: Your function is independent of $r$, so the gradient always lies on the surface of the sphere, so in this case the surface gradient is the normal gradient. 
For the divergence, try using the divergence in spherical coordinates: http://en.wikipedia.org/wiki/Divergence#Spherical_coordinates The surface divergence for a general vector field on a differentiable manifold is discussed in http://en.wikipedia.org/wiki/Laplace–Beltrami_operator In fact, the first expression you obtained already simplifies to $-12u$ on the unit sphere: using $\sin^2\theta/\sin\theta = \sin\theta$ and $3\cos^2\theta - \sin^2\theta - 3 = -4\sin^2\theta$, the two terms combine to $36\sin(3\varphi)\sin\theta\,(3\cos^2\theta - \sin^2\theta - 3) = -144\sin(3\varphi)\sin^3\theta = -12u$.
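To confirm the bookkeeping, here is a short SymPy sketch (my addition) applying the Laplace–Beltrami operator of the round unit sphere, with $\theta$ the polar angle:

```python
import sympy as sp

ph, th = sp.symbols('phi theta')
u = 12*sp.sin(3*ph)*sp.sin(th)**3

# Laplace-Beltrami on the unit sphere (theta = polar angle, phi = azimuth)
lap = (sp.diff(sp.sin(th)*sp.diff(u, th), th)/sp.sin(th)
       + sp.diff(u, ph, 2)/sp.sin(th)**2)
print(sp.simplify(lap + 12*u))   # 0, i.e. the Laplace-Beltrami of u equals -12u
```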
H: Proof of Zassenhaus Lemma Lemma 4.52 from Advanced Modern Algebra by Rotman: Given four subgroups $A \triangleleft A^*$ and $B \triangleleft B^*$ of a group G, then $A(A^* \cap B) \triangleleft A(A^* \cap B^*)$, $B(B^* \cap A) \triangleleft B(B^* \cap A^*)$, and there is an isomorphism $$\frac{A(A^* \cap B^*)}{A(A^* \cap B)} \cong \frac{B(B^* \cap A^*)}{B(B^* \cap A)}$$ Proof We claim that $(A \cap B^*) \triangleleft (A^* \cap B^*)$: that is, if $c \in A \cap B^*$ and $x \in A^* \cap B^*$, then $xcx^{-1} \in A \cap B^*$. Now $xcx^{-1} \in A$ because $c \in A$, $x \in A^*$, and $A \triangleleft A^*$; but also $xcx^{-1} \in B^*$, because $c,x \in B^*$. Hence, $(A \cap B^*) \triangleleft (A^* \cap B^*)$; similarly, $(A^* \cap B) \triangleleft (A^* \cap B^*)$. Therefore, the subset D, defined by $D = (A \cap B^*)(A^* \cap B)$, is a normal subgroup of $A^* \cap B^*$, because it is generated by two normal subgroups. Using the symmetry in the remark, it suffices to show that there is an isomorphism $$\frac{A(A^* \cap B^*)}{A(A^* \cap B)} \rightarrow \frac{A^* \cap B^*}{D}.$$ Define $\phi: A(A^* \cap B^*) \rightarrow (A^* \cap B^*)/D$ by $\phi: ax \rightarrow xD$, where $a \in A$ and $x \in A^* \cap B^*$. Now $\phi$ is well-defined: if $ax=a'x'$, where $a' \in A$ and $x' \in A^* \cap B^*$, then $(a')^{-1}a = x'x^{-1} \in A \cap (A^* \cap B^*) = A \cap B^* \subseteq D$. Also, $\phi$ is a homomorphism: $axa'x' = a''xx'$, where $a''=a(xa'x^{-1}) \in A$ (because $A \triangleleft A^*$), and so $\phi(axa'x') = \phi(a''xx') = xx'D = \phi(ax)\phi(a'x').$ It is routine to check that $\phi$ is surjective and that $\ker \phi = A(A^* \cap B)$. The First Isomorphism Theorem completes the proof. I have some problems with seeing that the kernel is $A(A^* \cap B)$. I understand that any element that belongs to $A(A^* \cap B)$ is of course mapped to the identity. But how do we know that these are the only elements in the kernel? Because we also know that $(A \cap B^*) \subseteq A^* \cap B^*$, and any element that belongs to $A(A \cap B^*)$ must also be mapped to the identity, right? Thanks in advance AI: Suppose $ax$ is in the kernel of $\phi$. This means that $x\in D=(A\cap B^*)(A^*\cap B)$. Hence, we can write $x=a'x'$ where $a'\in A\cap B^*\subset A$ and $x'\in A^*\cap B$. It follows that $ax=aa'x'\in A(A^*\cap B)$.
H: Subspaces and Dimension of sum of two subspaces. A simple question.i am in the initial phase of learning linear algebra. Need your help. I have made it through my own understanding. Wanted to know if i am thinking right. Lets take the subspace {(a,b,c,d,e,0,0,0,0,0,0) $ \forall $ a,b,c,d,e $ \in $ R} . What i wanted to know is whether we can regard this subspace as the vector space $ R^5 $ ??? as we can get any vector in $ R^5 $ using this subspace as all the other coordinates in the remaining dimensions are "0"...so, we can say that : {(a,b,c,d,e,0,0,0,0,0,0) $ \forall $ a,b,c,d,e $ \in $ R} is a subspace of $ R^{11} $ or it is the vector space $ R^5 $ ....we can say either way right?? as for another example lets take the subspace {(a,b,1) $ \forall $ a,b $ \in $ R}. Here we can say that it is subspace of $ R^3 $ but not the vector space $ R^2 $ as the third coordinate is "1" here and not "0"..had it been "0" we could have said it is subspace of $ R^3 $ or it is the vector space $R^2$..am i thinking it the right way??? my second doubt is : we know dim(U1+U2) = dim(U1)+dim(U2)-dim(U1 $\cap$ U2) (here dim denotes dimension) i have gone through the proof and its fine.What i also thought of it is as follows : Let U1 = {(a,b,c,0,0,0) $\forall$ a,b,c $\in$ R} let U2 = {(0,0,d,e,f,g) $\forall$ d,e,f,g $\in$ R} Now,when we will take the dimension of U1+U2, we will get the 3rd coordinate of the vector space (U1+U2) as common from both the subspaces, thus all we have to do is write the dimension as : dim(U1+U2) = 3+4-1 = 6. I guess i am right on this one.If not, kindly point out my mistake. Besides, as i mentioned in my first doubt, can we write here U1 is the subspace of $R^6$ or it is the vector space $R^3$?? Similarly U2 is the subspace of $R^6$ or it is the vector space $R^4$??? In case of U2, the coordinates are any real number at 3rd,4th,5th and 6th place..if i am thinking right in my first doubt,can we here also say that U2 is the vector space $R^4$?? coz what i think where the coordinates are real (other than "0" ofcourse) the place where the coordinates are placed should not bother us ... Thanks for your patience...one final question.. If whatever i have said upto now is right, then lets take the subspace : {(0,a,0,0,b,0,0,0,c) $\forall$ a,b,c $\in$ R} My question is the same.Can we say it is a subspace of $R^9$ or that it is the vector space $R^3$ ??? So, these are the things i wanna clear out. i know its simple ones. I have just started learning linear algebra. and the way i am thinking is not explained in the book. So, needed help from you guys.Thanks for taking time to read and Thanks for any help in advance. AI: You've done the sum dimension just right. As for $U_2$, for example, you can easily say that $U_2$ is isomorphic to (roughly, "behaves the same as") $\Bbb R^4$. You should be cautious, though, about saying that it is $\Bbb R^4$, since there are many subspaces of $\Bbb R^6$ isomorphic to $\Bbb R^4$. Only one thing is well and truly incorrect: $\{(a,b,1):a,b\in\Bbb R\}$ is not a linear subspace of $\Bbb R^3,$ as (for one thing) it isn't closed under vector addition.
H: help understanding step in derivation of correlation coefficient I'm looking to understand the starred step in the derivation below (also, if someone could help with the LaTeX alignment, I'd appreciate it). The regression line is $y= b_0 + b_1 x$, where $b_0$ and $b_1$ can be found by: 1) taking the difference between each observed value $y_i$ and the corresponding point on the regression line, $b_0 + b_1 x_i$: $$\text{difference} = y_i - b_0 - b_1 x_i$$ 2) summing the squares of the differences from 1) to get the sum of squares: $$SS = \sum \limits_{i=1}^n (y_i - b_0 - b_1 x_i)^2$$ 3) taking the partial derivatives with respect to $b_0$ and $b_1$, setting them to zero, and solving for each: $$ \begin{align} \text{solving for } b_0 \\ SS &= \sum(Y_i - b_0 - b_1 X_i)^2\\ SS &= \sum (Y_i^2 - 2Y_i b_0 - 2 Y_i b_1 X_i + 2b_0 b_1 X_i + b_1^2 X_i^2 + b_0^2) &\text{expand the square}\\ \frac{ \partial }{\partial b_0}SS &= \sum (-2Y_i + 2b_1 X_i + 2b_0) &\text{partial derivative wrt } b_0\\ 0 &= \sum 2(-Y_i + b_1 X_i + b_0) &\text{set to zero}\\ 0 &= \sum (-Y_i + b_1 X_i + b_0) &\text{divide both sides by 2}\\ 0 &= \sum -Y_i + \sum b_1 X_i + \sum b_0 &\text{split summation into parts}\\ \sum Y_i &= \sum b_1 X_i + \sum b_0 \\ \sum Y_i &= b_1 \sum X_i + n b_0 \\ \frac{1}{n}\Bigl(\sum Y_i - b_1 \sum X_i \Bigr) &= b_0 \\ \bar Y - b_1 \bar X &= b_0 &\text{rewrite sums as averages, since } \tfrac{1}{n} \sum Y_i = \bar Y\\ \end{align} $$ $$ \begin{align} \text{solving for } b_1\\ \frac{ \partial }{\partial b_1}SS &= \sum (-2Y_iX_i + 2b_0 X_i + 2 b_1 X_i^2) &\text{partial derivative wrt } b_1\\ 0 &= 2\sum (-Y_iX_i + b_0 X_i + b_1 X_i^2) \\ 0 &= \sum (-Y_iX_i + b_0 X_i + b_1 X_i^2) \\ 0 &= -\sum Y_iX_i + b_0 \sum X_i + b_1 \sum X_i^2 &\text{split summation}\\ 0 &= -\sum Y_iX_i + (\bar Y - b_1 \bar X) \sum X_i + b_1 \sum X_i^2 &\text{substitute } b_0\\ 0 &= -\sum Y_iX_i + \bar Y \sum X_i - b_1 \bar X \sum X_i + b_1 \sum X_i^2 &\text{distribute}\\ b_1 \bar X \sum X_i - b_1 \sum X_i^2 &= -\sum Y_iX_i + \bar Y \sum X_i &\text{collect } b_1 \text{ terms}\\ b_1 \Bigl(\bar X \sum X_i - \sum X_i^2\Bigr) &= -\sum Y_iX_i + \bar Y \sum X_i \\ b_1 &= { \bar Y \sum X_i -\sum Y_iX_i \over \bar X \sum X_i - \sum X_i^2 }\\ b_1 &= { \frac{1}{n} \sum Y_i \sum X_i -\sum Y_iX_i \over \frac{1}{n} \sum X_i \sum X_i - \sum X_i^2 } \cdot \frac{-n}{-n}\\ b_1 &= { n \sum Y_iX_i - \sum Y_i \sum X_i \over n \sum X_i^2 -(\sum X_i)^2 } \\ \end{align} $$ In summary: $$ \begin{align} b_0 &= \frac{1}{n} \sum y_i - b_1 \frac{1}{n} \sum x_i\\ b_1 &= {n \sum x_i y_i - \sum x_i \sum y_i \over n \sum x_i^2-(\sum x_i)^2} \end{align} $$ (derivation shown in http://polisci.msu.edu/jacoby/icpsr/regress3/lectures/week2/5.LeastSquares.pdf) From this point you can use $b_1$ to get the correlation coefficient as follows: $$ \begin{align} b_1 &= {\frac{1}{n} \sum x_i y_i - (\frac{1}{n}\sum x_i)(\frac{1}{n} \sum y_i) \over \frac{1}{n} \sum x_i^2 -(\frac{1}{n}\sum x_i)^2} & \text{divide top and bottom by } n^2 \\\\ b_1 &= {\frac{1}{n} \sum x_i y_i - \bar x \bar y \over \frac{1}{n} \sum x_i^2 -(\bar x)^2} & \text{rewrite products of sums as averages} \\\\ b_1 &= {\frac{1}{n} \sum (x_i - \bar x)(y_i - \bar y) \over \frac{1}{n} \sum (x_i - \bar x)^2} & \color{red}{*}\ \text{application of inscrutably arcane magic} \\\\ b_1 &= { \sum (x_i - \bar x)(y_i - \bar y) \over \sqrt{\sum (x_i - \bar x)^2}\,\sqrt{\sum (x_i - \bar x)^2} } & \text{cancel } \tfrac{1}{n}\text{, factor denominator}\\\\ b_1 &= { \sum (x_i - \bar x)(y_i - \bar y) \over \sqrt{\sum (x_i - \bar x)^2}\,\sqrt{\sum (x_i - \bar x)^2} } \cdot {\sqrt{\sum(y_i - \bar y)^2} \over \sqrt{\sum(y_i - \bar y)^2}} & \text{multiply by 1} \\\\ b_1 &= { \sum (x_i - \bar x)(y_i - \bar y) \over \sqrt{\sum (x_i - \bar x)^2}\,\sqrt{\sum(y_i - \bar y)^2}} \cdot {\sqrt{\sum(y_i - \bar y)^2} \over \sqrt{\sum (x_i - \bar x)^2} } & \text{re-arrange} \\\\ b_1 &= R\, \frac{S_y}{S_x} \end{align} $$ (here $S_x=\sqrt{\sum (x_i - \bar x)^2}$ and $S_y=\sqrt{\sum (y_i - \bar y)^2}$). AI: For the numerator, observe that: $$ \begin{align} \frac{1}{n} \left( \sum_{i=1}^n x_iy_i \right) - \bar x \bar y &= \frac{1}{n} \left( \sum_{i=1}^n x_iy_i \right) - \dfrac{n}{n}\bar x \bar y & \text{common denominator}\\ &= \frac{1}{n} \left( \sum_{i=1}^n x_iy_i - n\bar x \bar y \right) & \text{factor out }1/n\\ &= \frac{1}{n} \left( \sum_{i=1}^n x_iy_i - n\bar x \bar y - n\bar x \bar y + n\bar x \bar y \right) & \text{add $0$ in a fancy way}\\ &= \frac{1}{n} \left( \sum_{i=1}^n x_iy_i - \bar x(n\bar y) - \bar y(n\bar x) + n(\bar x \bar y) \right) & \text{rearrange}\\ &= \frac{1}{n} \left( \sum_{i=1}^n x_iy_i - \bar x\sum_{i=1}^ny_i - \bar y \sum_{i=1}^n x_i + \sum_{i=1}^n\bar x \bar y \right) & \text{change back to sigmas}\\ &= \frac{1}{n} \sum_{i=1}^n (x_iy_i - \bar xy_i - \bar y x_i + \bar x \bar y) & \text{combine sigmas}\\ &= \frac{1}{n} \sum_{i=1}^n (x_i- \bar x)(y_i - \bar y) & \text{factor}\\ \end{align} $$ As for the denominator: $$ \begin{align} \left(\frac{1}{n} \sum_{i=1}^n x_i^2 \right) - (\bar x)^2 &= \left(\frac{1}{n} \sum_{i=1}^n x_i^2 \right) - \left(\dfrac{n}{n}(\bar x)^2 \right) & \text{common denominator}\\ &= \dfrac{1}{n}\left(\sum_{i=1}^n x_i^2 - n(\bar x)^2 \right) & \text{factor out $1/n$}\\ &= \dfrac{1}{n} \left( \sum_{i=1}^n x_i^2 - 2n(\bar x)^2 + n(\bar x)^2 \right)& \text{add $0$ in a fancy way}\\ &= \dfrac{1}{n} \left( \sum_{i=1}^n x_i^2 - 2\bar x(n\bar x) + n(\bar x)^2 \right)& \text{rearrange}\\ &= \dfrac{1}{n} \left( \sum_{i=1}^n x_i^2 - 2\bar x \sum_{i=1}^nx_i + \sum_{i=1}^n(\bar x)^2 \right)& \text{change back to sigmas}\\ &= \dfrac{1}{n} \sum_{i=1}^n \left( x_i^2 - 2\bar xx_i + (\bar x)^2 \right)& \text{combine sigmas}\\ &= \dfrac{1}{n} \sum_{i=1}^n ( x_i - \bar x)^2 & \text{factor}\\ \end{align} $$
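As a numerical cross-check of the closed forms above (a minimal sketch assuming NumPy; the data set is arbitrary synthetic data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 * x + 1.0 + rng.normal(size=50)   # noisy linear data
n = len(x)

# slope and intercept from the derivation above
b1 = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
b0 = y.mean() - b1 * x.mean()

# relation to the correlation coefficient: b1 = R * S_y / S_x
R = np.corrcoef(x, y)[0, 1]
assert np.isclose(b1, R * y.std() / x.std())
print(b0, b1)
```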
H: If $a, b$ are positive integers, does $\;b\mid(a^2 + 1)\implies b\mid (a^4 + 1)\quad?$ If $a, b$ are positive integers, does $\;b\mid(a^2 + 1)\implies b\mid (a^4 + 1)\;$? Explain if this is true or not. If no, give a counterexample. AI: Suppose $b = 5, a = 3$: $$5\mid (3^2 + 1)\;\;\text{but}\;\;5 \not\mid (3^4 + 1)$$ Or, simply choose $a = 2, b= 5$ and again, $$5\mid (2^2 + 1)\;\;\text{but}\;\;5 \not\mid (2^4 + 1)$$ What is true is that $b\mid (a^2 + 1) \implies b\mid(a^2 + 1)^2$, but note that $$\begin{align}\;(a^2 + 1)^2 &= (a^4 + 2a^2 + 1) \\ \\ &= \left[(a^4 + 1) + 2a^2\right]\\ \\ &\neq (a^4 + 1)\end{align}$$
H: Why does this summation of ones give this answer? I saw this in a book and I don't understand it. Suppose we have nonnegative integers $0 = k_0<k_1<...<k_m$ - why is it that $$\sum\limits_{j=k_i+1}^{k_{i+1}}1=k_{i+1}-k_i?$$ AI: In general, $$\sum_{k=m+1}^n 1=n-m$$ This is because $$\sum_{k=1}^m1=m$$ $$\sum_{k=1}^n 1=n$$ and $$\sum_{k=m+1}^n 1=\sum_{k=1}^n1-\sum_{k=1}^m1=n-m$$
H: Find the complex number that satisfies the following conditions Find all values of $z \in \Bbb C$ such that $z + \bar{z} = 18$ and $z\cdot\bar{z} = 84$. I don't know how to find these values; can someone help me solve this? AI: Hint (please check first that you have written the question correctly, because an equation like $z+z$ would look odd): any complex number can be written as $z=a+ib$ where $a,b$ are real numbers. Substitute into the given conditions and solve for $a,b$. Edit: Now your question makes sense. Try this: $$z\cdot\bar{z} =84$$ $$\Rightarrow z\cdot(18-z) =84$$ $$z^2-18z+84=0$$ This is a quadratic equation. So....
H: Angle preserving linear maps In Spivak's Calculus on Manifolds, in part (c) of question 1-8, he asks the following question: What are all angle preserving $T:\mathbf{R}^n \to \mathbf{R}^n$? I already showed that if $T$ is diagonalizable with a basis $\{x_1,\ldots,x_n\}$ where $Tx_i = \lambda_i x_i$, then $T$ is angle preserving $\iff$ $|\lambda_i| = |\lambda_j|$ for all $i,j$ (this was part (b)). Perhaps this can be used? Thanks. AI: Take any orthonormal basis $\mathbf e_1,\cdots,\mathbf e_n$. These are mapped to a set $T \mathbf e_j = \mathbf f_j$ which are also orthogonal (and assumed non-zero). Hence in our orthonormal basis, $T$ is a matrix with columns $\mathbf f_j$ which are orthogonal. $D=T^\dagger T$ is a diagonal matrix with entry $(jj)$ given by $\lVert \mathbf f_j\rVert^2$. Further, considering the angle between $\mathbf e_1$ and $\mathbf e_1+\mathbf e_k$ gives $$\cos \theta = 1/\sqrt 2 = \frac{\mathbf e_1^\dagger D (\mathbf e_1+\mathbf e_k)}{\sqrt{\mathbf e_1^\dagger D \mathbf e_1\times (\mathbf e_1+\mathbf e_k)^\dagger D (\mathbf e_1+\mathbf e_k)}} = \frac{\lVert \mathbf f_1\rVert^2}{\lVert \mathbf f_1\rVert\sqrt{\lVert \mathbf f_1\rVert^2+\lVert \mathbf f_k\rVert^2}}$$ and hence $$\lVert \mathbf f_k\rVert = \lVert \mathbf f_1\rVert$$ Therefore, $D=\lambda I$ is a positive multiple of the identity matrix. Accordingly, $\lambda^{-1/2}T$ is an orthogonal matrix. Thus since clearly such matrices always preserve angles, we are done. The answer is: non-zero multiples of orthogonal matrices. Note: Whilst $T$ need not be diagonalizable, $T^\dagger T$ always is. (It's a real, symmetric matrix.)
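A quick numerical illustration of the conclusion (a sketch assuming NumPy): a random orthogonal matrix is produced via a QR factorization, scaled by a nonzero constant, and checked to preserve the angle between two random vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # random orthogonal matrix
T = 2.5 * Q                                   # nonzero multiple of it

def angle(u, v):
    return np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

u, v = rng.normal(size=4), rng.normal(size=4)
assert np.isclose(angle(u, v), angle(T @ u, T @ v))  # angles preserved
```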
H: Basics of Infinitary Formal Languages Reading through Hodges' "A Shorter Model Theory", he gives the following symbolism (pgs. 23-25) for the first-order language constructed in the normal way with only finitely many formulas conjoined/disjoined together and only finitely many quantifiers in a row: $L_{\omega \omega}$. The first subscript is, from what I can gather, meant as a strict upperbound for conjunction/disjunction, while the second is meant as a strict upperbound on quantification (I know that doesn't quite make sense--hence the question). The notation is also meant to generalize in the obvious way to $L_{\alpha \beta}$. Why is Hodges using the ordinal $\omega$ rather than, say, the cardinal $\aleph_0$? Is the ordering he has in mind simply from left to right? If so, how can this be meaningful given that, for example, his disjunction/conjunction operations are over sets of formulas (and thus seem to be without an ordering)? AI: As far as I know, the only reasons for writing $\omega$ rather than $\aleph_0$ in this context are (1) tradition and (2) avoiding second-level subscripts (for a while). In particular, if conjunctions, disjunctions, and quantifier blocks are given as sets rather than sequences, then the $\omega$ here and the $\alpha$ and $\beta$ in similar contexts should be understood as strict upper bounds for cardinalities. No orderings should be involved.
H: The series expansion of $\frac{1}{\sqrt{e^{x}-1}}$ at $x=0$ The function $ \displaystyle\frac{1}{\sqrt{e^{x}-1}}$ doesn't have a Laurent series expansion at $x=0$. But according to Wolfram Alpha, it does have a series expansion with terms raised to non-integer powers: $$ \frac{1}{\sqrt{e^{x}-1}} = \frac{1}{\sqrt{x}}- \frac{\sqrt{x}}{4} + \cdots$$ How is that series derived? My initial thought was to use the general binomial theorem, but I'm not sure how. AI: Related problems: (I), (II). Just find the Taylor series of the function $$ \frac{\sqrt{x}}{\sqrt{e^{x}-1}}. $$ Added: Here is a formula for the $n$th derivative of the function $$\left(\frac{\sqrt{x}}{\sqrt{e^{x}-1}}\right)^{(n)}= \frac{\pi}{2}\sum _{k=0}^{n} \sum _{i=0}^{k} \sum _{m=0}^{i}{ \frac { \binom {k}{i}\left[\matrix{i\\m}\right] \left\{\matrix{n\\k}\right\} x^{\frac{1}{2}-m}\, {\rm e}^{(k-i) x} \left( {\rm e}^x - 1 \right)^{i-k-\frac{1}{2}} }{\Gamma \left( \frac{1}{ 2}-k+i \right) \Gamma \left( \frac{3}{2}-m \right) }} .$$ Note: I'll appreciate it if someone can verify this formula with Maple or Mathematica.
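If SymPy is available, the Puiseux expansion can be obtained exactly as the answer suggests: expand the function $\sqrt{x/(e^x-1)}$, which is analytic at $0$, and divide by $\sqrt x$ afterwards. A sketch (the commented output is what the expansion should give, matching Wolfram Alpha's $\frac{1}{\sqrt x}-\frac{\sqrt x}{4}+\frac{x^{3/2}}{96}+\cdots$):

```python
from sympy import symbols, sqrt, exp, series

x = symbols('x', positive=True)
g = sqrt(x / (exp(x) - 1))      # analytic at 0, with g(0) = 1
print(series(g, x, 0, 3))       # expected: 1 - x/4 + x**2/96 + O(x**3)
# dividing by sqrt(x): 1/sqrt(x) - sqrt(x)/4 + x**(3/2)/96 + ...
```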
H: Angle between different rays (3D line segments) and computing their angular relationships I have several positions (say A, B, C, ...) and I know their 3D coordinates. From each point a ray is drawn so that all of them converge at a given (known) point, say O. This point O lies on a known planar surface (the dark plane in the question's figure). Now, my problem is to determine the mutual arrangement of the rays and their relationship with the plane. For example, I want to know that ray DO comes first, then BO, AO, etc. Once I know their order, I want the angle between each pair of adjacent rays, which is my main goal. In the example case, I need to know the angles DOB, BOA, AOC, COE, EOF. Additionally, I need the angle between the first ray and the plane, and between the last ray and the plane (since DO and FO are next to the plane). (I want only the adjacent angles, not all possible angles, so the correct adjacency or point order is needed to avoid unnecessary computations.) I think you can understand my question. NOTE: the points A, B, C, ..., O do not lie on a single plane; $\vec n$ is the normal vector of the plane. Please show me a way to do this without unnecessary computations, as I want to implement it in C++. AI: You have to use the scalar product: $$\vec u\cdot\vec v=|\vec u|\cdot|\vec v|\cdot\cos\angle(\vec u,\vec v)$$ So the angle can be obtained by: $$\cos\angle(\vec u,\vec v)= \frac{u_1v_1+u_2v_2+u_3v_3}{\sqrt{{u_1}^2+{u_2}^2+{u_3}^2}\cdot \sqrt{{v_1}^2+{v_2}^2+{v_3}^2}}\,,$$ where $|\vec u|=\sqrt{{u_1}^2+{u_2}^2+{u_3}^2}$ by the Pythagorean theorem, and $u_1,u_2,u_3$ are the coordinates of $\vec u$. Also, if points are given by coordinates, the coordinates of the vector $\vec{AB}$ can be calculated as $B-A$ (coordinatewise). If $\vec n$ is a normal vector of the plane, then the angle between the plane and a vector $\vec u$ is $90^\circ-\angle(\vec u,\vec n)$. For acute angles $\alpha,\beta$ we have $\ \alpha<\beta \iff \cos\alpha>\cos\beta$, so we can order the angles by their cosines, with no specific need for $\arccos$. We also have: $\ \vec u\cdot\vec n>0 \ \iff\ \vec u$ points into the same half-space as $\vec n$; $\ \vec u\cdot\vec n<0 \ \iff\ \vec u$ points into the opposite half-space; $\ \vec u\cdot\vec n=0 \ \iff\ \vec u$ is parallel to the plane.
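Here is a minimal computational sketch of the above (in Python/NumPy for brevity, though the asker targets C++ — the translation is mechanical). The function name and the ordering criterion are my own choices; the sketch assumes the rays form a one-sided fan lying roughly in a common plane, as in the question's figure:

```python
import numpy as np

def adjacent_angles(points, O, n):
    """Order the rays O->P_i around their common fan and return
    (angle of first ray to the plane, adjacent-ray angles,
    angle of last ray to the plane), all in radians."""
    n = np.asarray(n, float)
    n /= np.linalg.norm(n)
    rays = np.array([np.asarray(p, float) - O for p in points])
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)

    m = rays.mean(axis=0)          # mean direction of the fan
    t = m - (m @ n) * n            # its component inside the plane
    t /= np.linalg.norm(t)

    # sweep angle of each ray in the (t, n) frame: ~0 at the plane on
    # one side of the fan, ~pi at the plane on the other side
    sweep = np.arctan2(rays @ n, rays @ t)
    rays = rays[np.argsort(sweep)]

    between = [np.arccos(np.clip(rays[i] @ rays[i + 1], -1.0, 1.0))
               for i in range(len(rays) - 1)]
    to_plane = np.arcsin(np.clip(rays @ n, -1.0, 1.0))  # true elevation
    return to_plane[0], between, to_plane[-1]
```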
H: Proving $q\Rightarrow r \models (p\land q) \Rightarrow (p \land r)$ using only natural deduction. I'm trying to prove $$q\Rightarrow r \models (p\land q) \Rightarrow (p \land r)$$ using only the natural deduction rules in this handout. Any hints? I am not allowed to do transformational stuff, such as converting everything to CNF or DeMorgan's, unless they are proven. (I'm sure that will make things easier!). AI: $(1)\; q\rightarrow r \quad\text{premise}$ $(2)\; p\land q\quad\text{Assumption}$ $(3)\; p \quad\text{Conjunction Elimination}, (2)$ $(4)\; q\quad \text{Conjunction Elimination}, (2)$ $(5)\; r \quad \text{Modus Ponens}, (1), (4)$ $(6)\; p\land r\quad \text{Conjunction Introduction}, (3),(5)$ $(7) \;(p \land q) \rightarrow (p\land r) \quad\text{Conditional Introduction}, (2 - 6)$ Remark: Note that when the proposition you are asked to prove is an implication, that's almost always a sign-post that, after listing the premise(s), you'll want to begin with a sub-proof, led off by the assumption of the antecedent, with the aim of arriving at the consequent.
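Independently of the natural-deduction proof, the entailment can be checked semantically by brute force over all eight truth assignments — a small sketch:

```python
from itertools import product

for p, q, r in product([False, True], repeat=3):
    premise = (not q) or r                       # q -> r
    conclusion = (not (p and q)) or (p and r)    # (p /\ q) -> (p /\ r)
    assert (not premise) or conclusion           # premise entails conclusion
print("q -> r  |=  (p /\\ q) -> (p /\\ r) confirmed by truth table")
```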
H: Prove $a-b$ is prime Let $n\in\Bbb{N}$ with $n>1$, let $P_n^{\#}$ be the product of the first $n$ primes (the primorial), and let $p_n$ be the $n$th prime. Suppose $a$ and $b$ are coprime, $ab$ is a multiple of $P_n^{\#}$, and $\sqrt{a-b}<p_n$; prove that $a-b$ is prime. (Related: A Formula for Prime Numbers?) AI: $\sqrt{a-b} < p_n,\ $ so $\,a\!-\!b,\,$ if composite, has a prime factor $\,p \le \sqrt{a-b} < p_n.\,$ So $\,p\mid P_n^{\#}\mid ab,\,$ so $\,p\mid a\,$ or $\,b,\,$ hence also the other, $ $ by $\ p\mid a-b.\,$ Therefore $\ p\mid a,b,\,$ contra hypothesis $\,\color{#c00}{(a,b) = 1.}$ Or, more conceptually, by Euclid's Lemma $\,(a\!-\!b,ab)=1\,$ by $\,(a\!-\!b,a)=\color{#c00}{(a,b)}=(a\!-\!b,b).\,$ Therefore $\,a\!-\!b\,$ is also coprime to the factor $\,P_n^{\#}\,$ of $\,ab,\,$ so $\,a\!-\!b\,$ has no prime factor $\le p_n.$
H: What can we say about the order of a group given the orders of two elements? If I know that a group of finite order has two elements $a$ and $b$ such that their orders are $6$ and $10$, respectively, what statements can be made regarding the order of the group? I know by Lagrange's theorem that the orders of the elements must divide the order of the group, so I've taken the $\operatorname{lcm}$: I think the order of our group should be a multiple of $30$. But I'm thinking there's more I can say. AI: You can't say anything about the order of the group except that it must be divisible by $6$ and $10$ — equivalently, by $\operatorname{lcm}(6,10)=30$. In fact, nothing can be said about the order of $ab$, either!
H: Which book to read on quantum-related mathematics Recently I watched "The Big Bang Theory" and decided to google quantum mechanics. It really intrigued me, but I also realized that I don't yet have the mathematics to follow even the basics. So I decided first to improve my knowledge of mathematics. I have actually had a number of courses on calculus, though not very deep ones (the last topics were Fourier series and the Z-transform, if I remember correctly), plus a short course on statistics (basic things like distributions). This is definitely not enough to understand the basic proofs I encountered. The problem is that I could not google anywhere which books a beginner should read in order to understand the basics of all this mathematics. I know there are a lot of branches of maths, so could you help me out: which branch of maths is most tightly connected to quantum mechanics? I would be very grateful for links to books that a beginner could understand (difficult books would do if they are preceded by links to books for beginners). This question might seem stupid, but I think this is the best place to ask it. AI: See this: Mathematics for Quantum Mechanics
H: Do the tangents of two circles define concentric circles? Given two non-overlapping circles, $R_1$ and $R_2$, whose radii may be different, let the distance between the centers of $R_1$ and $R_2$ be $x$. Draw the four tangents between $R_1$ and $R_2$. There will be two tangents that cross between $R_1$ and $R_2$ and two tangents that do not; call the two tangents that cross the inner tangents and the two that do not the outer tangents. I assert that two concentric circles can be drawn, $C_1$ and $C_2$: $C_1$ has the four tangent points of the inner tangents on its circumference, and $C_2$ has the four tangent points of the outer tangents on its circumference. I remember solving this problem using high-school geometry, basic algebra, and some trig, but that was over $20$ years ago. Is my assertion correct? If so, what is the solution? I vaguely remember that one key point was that radii drawn to tangent points are perpendicular to the tangent line. AI: For unequal radii the assertion is equivalent to the perpendicular bisectors of all the inner and outer tangent segments intersecting at one point. By symmetry this point would be on the line of centers, and one need only check it for one inner and one outer segment. In fact the point should be the midpoint of the line segment joining the centers; once you have observed that, it is easy to prove, because the perpendicular bisector is parallel to the radii connecting the centers to the tangent points, and lies halfway between the two lines extending those radii. This argument also holds in the case of equal radii.
H: Uniform continuity of $1/x^2$ on $[1,\infty)$. So, I have the function $f : (0,\infty) \to \Bbb R$ with $f(x) = \frac{1}{x^2}$, and I have to show that $f(x)$ is uniformly continuous on the set $[1,\infty)$. Is the following proof correct? $[1,\infty) = \cup_{n=1}^{\infty} [n,n+1]$. Each of those sets is compact, and $f(x)$ is continuous, so by Heine–Cantor it is UC on every compact set, and thus it is UC on the whole union of them. Is this coherent/consistent? AI: Knowing that it is uniformly continuous on a sequence of compact sets isn't sufficient to know that it is uniformly continuous on their union. Consider the fact that $$(0,\infty)=\bigcup_{n=1}^\infty\left[\frac1n,n+1\right]$$ with your given function to see why not. I recommend proceeding in the $\epsilon$-$\delta$ fashion, instead. Since your function gets less and less steep as you move to the right, it should suffice to find a $\delta$ that works in $[1,2],$ and show that it actually works everywhere in $[1,\infty).$
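For completeness, here is the explicit bound that the $\epsilon$-$\delta$ approach yields on all of $[1,\infty)$ (a sketch; the only fact used is $x,y\ge 1$): $$\left|\frac{1}{x^2}-\frac{1}{y^2}\right| = \frac{(x+y)\,|x-y|}{x^2y^2} = \left(\frac{1}{xy^2}+\frac{1}{x^2y}\right)|x-y| \le 2\,|x-y|,$$ so $\delta = \varepsilon/2$ works uniformly on $[1,\infty)$.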
H: Find the equation of a circle given two points and a line that passes through its center Find the equation of the circle that passes through the points $(0,2)$ and $(6,6)$. Its center is on the line $x-y =1$. AI: Hint: Find the equation of the line passing through the two given points. This is a chord of the circle. Using the midpoint formula and the fact that perpendicular lines have slopes that are negative reciprocals of each other, find the equation of the perpendicular bisector of this chord. This new line is guaranteed to pass through the centre of circle. Thus, the intersection of this new line and the given line $x-y=1$ must be the centre. After that, you can find the radius by using the distance formula between the centre and one of the given points.
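A sketch of the computation the hint outlines, done with SymPy (writing the centre as $(h, h-1)$, which is exactly the condition of lying on $x-y=1$, and equating its squared distances to the two given points):

```python
from sympy import symbols, Eq, solve

h = symbols('h')
# centre (h, h-1) equidistant from (0, 2) and (6, 6)
eq = Eq(h**2 + (h - 3)**2, (h - 6)**2 + (h - 7)**2)
hc = solve(eq, h)[0]            # 19/5
kc = hc - 1                     # 14/5
r2 = hc**2 + (kc - 2)**2        # 377/25
print(hc, kc, r2)               # circle: (x - 19/5)^2 + (y - 14/5)^2 = 377/25
```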
H: Proving that $\frac{n!-1}{2n+7}$ is not an integer when $n>8$ How can I prove that if $n$ is a positive integer with $$n>8$$ then $$\frac{n!-1}{2n+7}$$ is never an integer? One of the first things that came to my mind is that $n!-1$ is not divisible by any of the numbers from $2$ to $n$, so if $$2n+7<n^2$$ (that is, if $n>3$), then $2n+7$ must be a prime number in order to actually get an integer. But that's all I have until now; after checking some basic number theory theorems, I even tried linking it to Wilson's theorem, unsuccessfully. Any hint or idea will be very appreciated. AI: As you observed, unless $2n+7$ is prime, you get $2n+7\mid n!$, and then $2n+7\nmid n!-1$. This is clear: if $2n+7$ is not prime, write $2n+7=ab$; since $2n+7$ is odd, neither $a$ nor $b$ is $2$, and both must be less than $n$. As Jonas pointed out, if $a \neq b$ the conclusion is immediate, while if $a=b$ we have $a^2=2n+7$ and one can conclude that $2a \leq n$ (this holds for $n>9$; the one remaining case $n=9$, where $2n+7=25$, is checked directly, since $9!\equiv 5 \pmod{25}$). So let's cover the case where $p=2n+7$ is prime. Then $n=\frac{p-7}{2}=\frac{p-1}{2}-3$. If $p$ divided $n!-1$ we would get $$n! \equiv 1 \pmod p\,,$$ thus $$\left(\frac{p-1}{2}-3\right)! \equiv 1 \pmod p \,,$$ and so $$\left(\frac{p-1}{2}\right)! \equiv \frac{p-5}{2}\cdot\frac{p-3}{2}\cdot\frac{p-1}{2} \equiv 8^{-1}(-15) \pmod p \,.$$ Square both sides: $$\left[\left(\frac{p-1}{2}\right)!\right]^2 \equiv 64^{-1}(15)^2 \pmod p \,.$$ The LHS is exactly $(p-1)!\,(-1)^{\frac{p-1}{2}}$, and $(p-1)!\equiv -1 \pmod p$ by Wilson's theorem. Thus $$(-1)^{\frac{p+1}{2}} \equiv 64^{-1}(15)^2 \pmod p \,,$$ or $$\pm 64 \equiv 225 \pmod p\,,$$ which means that $p$ divides $225 \pm 64$. But $p=2n+7>23$, while $225-64=161=7\cdot 23$ and $225+64=289=17^2$, so every prime factor of $225\pm 64$ is at most $23<p$ — a contradiction.
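Before (or after) proving it, the claim is cheap to test empirically, since Python integers are unbounded — a brute-force sketch:

```python
from math import factorial

for n in range(9, 500):
    assert (factorial(n) - 1) % (2 * n + 7) != 0
print("(n! - 1)/(2n + 7) is not an integer for any 9 <= n < 500")
```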
H: basic doubt about topological manifold In his book "Introduction to Smooth Manifolds", J.M. Lee defines a topological manifold to be a second countable, Hausdorff space with every point having a neighbourhood homeomorphic to an open subset of $\mathbb{R}^{n}$ for some $n$. I was wondering should it not say that it should be homeomorphic to a connected open subset of $\mathbb{R}^{n}$? He also mentions that equivalent definitions are obtained if one replaces open subset by whole of $\mathbb{R}^{n}$ or by the open unit ball. I cannot see how this can happen as a connected open subset cannot be homeomorphic to open unit ball or whole of $\mathbb{R}^{n}$. AI: Well, as I was typing the question I got an insight! We could always choose the connected component of the open set of the manifold which is homeomorphic to an open subset of Euclidean space! Should this question be closed?
H: number of subgroups index $p$ equals number of subgroups order $p$ I'm doing an exercise in Dummit's book "Abstract Algebra" and stuck for a long time. I think I'm doing in the right way but I can't finish it. Hope someone can help me. I really appreciate it. Let $A$ be a finite abelian group and let $p$ be a prime. Let $A^{p} = \{a^{p}\mid a \in A\}$ and $A_{p} = \{x\mid x^{p} = 1\}$. Prove that $A/A^{p}$ is isomorphic to $A_{p}$, and the number of subgroups of $A$ of order $p$ equals the number of subgroups of $A$ of index $p$. I can prove that $A/A^{p}$ is isomorphic to $A_{p}$, and every subgroups order $p$ of $A$ must be subgroups order $p$ of $A_{p}$. So the number of subgroups order $p$ of $A$ equals number of subgroups order $p$ of $A_{p}$. Moreover because of the previous result, we must have this number equals number of subgroups order $p$ in $A/A^{p}$. So we try to build a bijection from the set of all subgroups order $p$ of $A/A^{p}$ into set of all subgroups index $p$ of $A$. I think that it's possible, because every subgroup $N$ of $A$ is normal and $A/N$ is a group order $p$. Can anyone help me go on in this way to solve this problem. I know there's a solution in Project Crazy Project, but I think that solution is cumbersome and not beautiful. Thanks AI: Consider the homomorphism "multiplication by $p$'' from $A$ to itself; the kernel is $A_p$ and the cokernel (quotient of the codomain by the image) is $A/A^p$. Since $A$ itself is finite, an analogue of the rank nullity theorem in linear algebra shows that $A_p$ and $A/A^p$ have the same order. Since they are abelian groups in which every element is killed by $p$, they can be thought of as vector spaces of the same finite dimension over $\mathbb F_p$. Now subgroups of order $p$ are contained in $A_p$, and so are precisely the one-dimensional $\mathbb F_p$-vector subspaces of $A_p.$ On the other hand, subgroups of index $p$ have to contain $A^p$, and so are in bijection (under the quotient map) with the codimension one subspaces of $A/A^p$. So now you are reduced to checking that if $V$ is a finite-dimensional vector space over $\mathbb F_p$, the number of one-dimensional subspaces and the number of codimension one subspaces are the same. This can be checked by using the fact that $V^*$ (the dual space to $V$) and $V$ are (non-canonically) isomorphic, and that the one-dimensional subspaces of $V$ are put in bijection with the codimension one subspaces of $V^*$ by considering annihilators. [Note: this answer is the same in spirit as Chris Godsil's, but I have replaced duality theory for finite abelian groups by linear algebra over $\mathbb F_p$, which might be more familiar.]
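For the last counting step in the answer above, one can also count directly instead of invoking annihilators (a quick sketch): each one-dimensional subspace of $V\cong\mathbb F_p^n$ is spanned by any of its $p-1$ nonzero vectors, so there are $$\frac{p^n-1}{p-1}$$ of them; and each codimension-one subspace is the kernel of a nonzero functional in $V^*$, where two functionals have the same kernel exactly when they are proportional — giving the same count $\dfrac{p^n-1}{p-1}$.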
H: Subgroup of elements of order at most $2^{m}$ The problem A5 in Putnam 2009 reads as follows: Is there a finite abelian group $G$ such that the product of the orders of all its elements is $2^{2009}$? The answer is No. I am reading the official solution. The solution starts by observing that if such a group existed, it would be a 2-group. By the structure theorem for finitely generated abelian groups, we may write $$G\cong\prod_{i=1}^{\infty} (\mathbb{Z}/2^{i}\mathbb{Z})^{e_i}$$ for some nonnegative integers $e_1, e_2, \ldots$, all but finitely many of which are $0$. I am having trouble understanding the step that immediately follows: For any nonnegative integer $m$, the elements of $G$ of order at most $2^m$ form a subgroup isomorphic to $$\prod_{i=1}^{\infty} (\mathbb{Z}/2^{\min(i, m)}\mathbb{Z})^{e_i}$$ I can see that the elements of order at most $2^{m}$ form a subgroup. But I don't see why this subgroup must be isomorphic to the group $\prod_{i=1}^{\infty} (\mathbb{Z}/2^{\min(i, m)}\mathbb{Z})^{e_i}$. Can someone shed light on this matter? Perhaps even show an explicit isomorphism if possible? I appreciate any input. AI: Just look at each factor individually. The subgroup of $\Bbb{Z}/2^i\Bbb{Z}$ of elements of order at most $2^m$ is isomorphic to $\Bbb{Z}/2^{\min(i, m)}\Bbb{Z}$: as a subgroup of a cyclic group it must be cyclic. If $i \le m$ then all $2^i$ elements have order at most $2^m$; if $i > m$, exactly $2^m$ of them do, because $\Bbb{Z}/2^i\Bbb{Z}$, being cyclic, contains an element $g$ of order exactly $2^m$, the subgroup generated by $g$ has $2^m$ elements, all of order at most $2^m$, and every element of order at most $2^m$ lies in it. Since the order of an element of a product of groups is the LCM of the orders of its projections onto the factors, an element of order at most $2^m$ in $G$ is given by simply choosing an element of order at most $2^m$ from each factor, which is exactly the desired isomorphism.
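The claimed structure of this subgroup is easy to sanity-check on a small example (a sketch; here $G=\Bbb Z/2\times\Bbb Z/4\times\Bbb Z/8$, and the direct element count is compared with the product of the factor sizes $2^{\min(i,m)}$):

```python
from itertools import product

factors = [2, 4, 8]                       # G = Z/2 x Z/4 x Z/8
for m in range(4):
    bound = 2**m
    # x in Z/k has order dividing 2^m  iff  k divides x * 2^m
    count = sum(1 for g in product(*[range(k) for k in factors])
                if all((x * bound) % k == 0 for x, k in zip(g, factors)))
    expected = 1
    for k in factors:
        expected *= min(k, bound)         # |Z/2^min(i,m)| per factor
    assert count == expected
    print(m, count)                       # (0,1) (1,8) (2,32) (3,64)
```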
H: Computing Taylor series, getting all 0's For $f(x) = \dfrac{x^3}{4-x^2}$, I started out by finding the first and second derivative. For $f'(x)$ I got $\;\;\dfrac{12x^2-x^4}{(4-x^2)^2}$. For $f''(x)$ I got $\;\;\dfrac{(4-x^2)(24x-4x^3)-(12x^2-x^4)(-4x) }{ (4-x^2)^3}$. After evaluating $f'(0)$ and $f''(0)$ I got $0$ for both ($f'(0)=0$ and $f''(0)= 0$), and for $f(0)$ I also got $0$. This is confusing me, because if I plug these values into the Taylor series formula I just get $0 + 0x + \dfrac{0\cdot x^2}{2!}$. Is that actually right, or did I go wrong somewhere? AI: You can certainly compute derivatives to find the Taylor series about $c=0$ for this function. However, this gets nasty very quickly, and as you astutely noticed, the first three coefficients of the Taylor series expansion are $0$! It's not easy to readily notice a pattern from computing derivatives, and you might be wondering if the derivatives at $0$ evaluate to $0$ in this fashion forever, or if there are eventually non-zero coefficients in this Taylor expansion. Is there an easy way to see this, and an easy way to do this problem? Well, yes, assuming you are familiar with the Taylor series for $\frac{1}{1-x}$. Note that we can rewrite your expression as $\frac{1}{4}x^{3} \cdot \frac{1}{1-(\frac{1}{4}x^{2})}$. Recall that the Taylor series expansion of $\frac{1}{1-x}$ about $c = 0$ is given by $$\sum_{i=0}^{\infty} x^{i}$$ Hence, our Taylor series expansion about $c = 0$ for the expression we're given is in fact: $$\frac{1}{4}x^{3} \cdot \sum_{i=0}^{\infty} (\frac{1}{4}x^{2})^{i}$$ You can easily see now why the coefficients on $1, x,$ and $x^{2}$ are $0$.
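With SymPy the pattern becomes visible immediately (a sketch, writing $f(x)=x^3/(4-x^2)$ as in the answer):

```python
from sympy import symbols, series

x = symbols('x')
f = x**3 / (4 - x**2)
print(series(f, x, 0, 8))   # x**3/4 + x**5/16 + x**7/64 + O(x**8)
```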
H: Pointwise convergence not enough to show incompleteness of continuous functions on $[0,1]$ under the $L^2$ norm. If we want to show that the space of continuous functions on $[0,1]$ under the $L^2$ norm (with inner product $(f,g) = \int_0^1 f \bar g$) is not complete, we have to find a Cauchy sequence in this space which does not converge in $C[0,1]$. Take the sequence: $f_n (x) = \begin{cases} 0 &\ \text{if}\ x\in [0, \frac{1}{2}),\\ n(x-\frac{1}{2}) &\ \text{if}\ x\in [\frac{1}{2}, \frac{1}{2}+\frac{1}{n})\\ 1 &\ \text{if}\ x\in [\frac{1}{2}+\frac{1}{n},1]. \end{cases}$ It is not hard to show this sequence is Cauchy in the $L^2$ norm. Next we need to show the sequence does not converge in this space. Intuitively, as $n$ approaches infinity the sequence converges pointwise to $f(x) = \begin{cases} 0 &\ \text{if}\ x\in [0, \frac{1}{2}),\\ 1 &\ \text{if}\ x\in [\frac{1}{2}, 1]. \end{cases}$ But $f(x)$ is not in the space of continuous functions on $[0, 1]$. Technically, though, this wouldn't be enough for a formal proof, since we still haven't shown that $f_n(x)$ cannot converge to some function in the space. My approach to this proof would be by contradiction, but I want help making the technical proof rigorous. AI: Suppose that it did converge to some function, $f \in C[0,1]$, in the $L^2$ norm. Then the integrals $$\int_0^1 |f - f_n|^2$$ go to zero, and so they must go to zero on every subinterval. If $x < \frac{1}{2}$, we can show that $f(x) = 0$, for if not, then by continuity we could choose a small interval about $x$ to make the integral above stay above some positive constant. A similar argument shows $f(x) = 1$ for $x > \frac{1}{2}$, and no continuous function can satisfy this.
H: Adjust a range of given values. If I have a number anywhere in the range 140–350 and I want to map it onto the corresponding range 0–360, what function can I run it through? For example: 140 would go through the function and return 0; 350 would go through the function and return 360; 245 (the midpoint of range 1) would go through the function and return 180 (the midpoint of range 2). I have a feeling the answer will involve using percentages. AI: You are correct, assuming you want the function to be linear. Let $f(x)$ be your function. We are given that $f(140)=0$ and $f(350)=360$. Using the slope formula, we obtain: $$ m=\dfrac{360-0}{350-140} = \dfrac{360}{210} = \dfrac{12}{7} $$ Using the point-slope formula, we obtain: $$ f(x)-0 = \dfrac{12}{7} \cdot (x-140) $$ or simply: $$ f(x)= \dfrac{12}{7}x-240 $$
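In code the map is a one-liner (a sketch; the constants come straight from the answer's point-slope computation, and the hypothetical `rescale` function generalizes to any pair of ranges):

```python
def rescale(x, lo1=140.0, hi1=350.0, lo2=0.0, hi2=360.0):
    """Linearly map x from [lo1, hi1] onto [lo2, hi2]."""
    return lo2 + (x - lo1) * (hi2 - lo2) / (hi1 - lo1)

print(rescale(140), rescale(245), rescale(350))  # 0.0 180.0 360.0
```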
H: Solving $0.0004<\frac{4,000,000}{d^2}<0.01$ From Stewart Precalculus, P86. This question is from the chapter about inequalities, and we are supposed to set up a model using inequalities to solve this problem. My working out was $$0.0004<\dfrac{4,000,000}{d^2}<0.01$$ In this chapter the author did not show us how to solve a pair of simultaneous inequalities involving a quotient, and I don't know how to do it. The only way I can think of is to split the compound inequality above into two separate single inequalities — is this what I am supposed to do here? $$0.0004<\dfrac{4,000,000}{d^2},\qquad\dfrac{4,000,000}{d^2}<0.01$$ Thanks. AI: $$0.0004<\dfrac{4,000,000}{d^2}<0.01$$ When we take reciprocals of positive quantities, the direction of each inequality reverses: for $x,y>0$, $x<y\implies\dfrac1x>\dfrac1y$. $$\dfrac{1}{0.0004}>\dfrac{d^2}{4,000,000}>\dfrac{1}{0.01}$$ Multiply each term by $4,000,000$: $$\dfrac{4,000,000}{0.0004}>\dfrac{d^2}{1}>\dfrac{4,000,000}{0.01}$$ $$10^{10}>d^2>4\times10^8$$ $$4\times10^8<d^2<10^{10}$$ $$\sqrt{4\times10^8}<d<\sqrt{10^{10}}$$ Since $d$ is a distance it can't be negative, so we take only the positive square root: $${2\times10^4}<d<{10^{5}}$$ $$20000<d<100000$$
H: Strong equidistribution of points on the n-sphere The vertices of a Platonic solid are equally distributed on its circumscribing sphere in a very strong sense: each of them has the same number of nearest neighbours and all distances between nearest neighbours are the same. It seems clear to me that the Platonic solids also provide the only examples for such equidistributed arrangements of points on the sphere, i.e. (∗) One can equidistribute (in the strong sense above) only 4, 6, 8, 12, or 20 points on the 2-sphere. The latter is - thus - a consequence from the fact that there are only five Platonic solids, which in turn is a consequence of Euler's theorem, for which in turn there is a whole bunch of proofs. My questions are: (1) Are there other "independent" and maybe more "direct" proofs of (∗)? (2) Are there corresponding results for general $n$-spheres? I assume that one can equidistribute on every $n$-sphere $n+2$ points (the generalized tetrahedron) and $2^{n+1}$ points (the generalized cube). (3) Are there other such (maybe partial) functions $f_3(n), f_4(n), f_5(n),\dots$ with e.g. $f_3(2) = 6, f_4(2) = 12, f_5(2) = 20$? AI: Unless I misunderstood your definition of strong equidistribution, I think that your claim that 20 is the maximum cardinality is wrong. Edit: A trivial example is that $n$ equally spaced points along the equator (with their longitudes differing by $2\pi/n$) are strongly equidistributed according to this definition. You can choose $n$ to be as large as you wish. For a more interesting example consider the icosahedral group $G$ of order 120 acting faithfully on $S^2$. If you pick any point $x$ on the surface of the sphere and consider the orbit $C=Gx$, it will have the strong equidistribution property. This is because $G\le O_3(\mathbb{R})$, so you can map any point of the orbit $C$ to any other by rotating and reflecting the sphere (both of these operations preserve all distances, so strong equidistribution follows). The same principle can be extended to other dimensions by using a finite subgroup of $O_n(\mathbb{R})$. Related problems are studied under the umbrella of telecom applications. I would suspect that a lot of examples of sets with strong equidistribution are to be found within the class of spherical codes; see N. J. A. Sloane's database of good spherical codes. Group codes (orbits of finite groups of orthogonal transformations) are an important subclass. A few formalities: Let $G\le O_n(\mathbb{R})$ be a finite group and $x\in\mathbb{R}^n$ an arbitrary initial point; then the associated group code is $$ C=Gx=\{gx\mid g\in G\}. $$ Because $G$ is a group we have $Gy=C$ for all $y\in C$. Because $G\le O_n(\mathbb{R})$ we have $$ d(x,y)=d(gx,gy) $$ for all vectors $x,y\in\mathbb{R}^n$ and all elements $g\in G$. Claim 1. The distance from a point of $C$ to its closest neighbour in $C$ is the same for every point of $C$. Proof. Let $y\in C$ with closest neighbour $y'\in C$, let $z\in C$ be any other point, and pick $g\in G$ such that $gy=z$ (such an element exists by construction of $C$). Then $$ d(z,gy')=d(gy,gy')=d(y,y'), $$ so $z'=gy'$ is at the same distance from $z$ as $y'$ is from $y$. There cannot be points of $C$ any closer to $z$, because then we could switch the roles of $z$ and $y$ and contradict ourselves. Claim 2. Let $x_1,x_2,\ldots,x_n$ be all the points of $C$ at minimum distance $d$ from $x$. Let $y\in C$ be arbitrary. Then there are exactly $n$ points of $C$ at distance $d$ from $y$ also. Proof. 
Again we select an orthogonal transformation $g\in G$ such that $gx=y$. Then all the points $y_i=gx_i,i=1,2,\ldots,n$ are at distance $d$ from $y$. They are also all distinct. There cannot be more than $n$ points at distance $d$ from $y$, because then we, again, contradict ourselves by reversing the roles of $x$ and $y$.
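The group-code construction is easy to experiment with numerically. A sketch (assuming NumPy), using the orbit of an arbitrary point on $S^2$ under the cyclic group of rotations about the $z$-axis — the answer's "trivial example", on a general circle of latitude. It verifies something even stronger than the two claims: every point of the orbit has the same sorted profile of distances to the other points.

```python
import numpy as np

n, rng = 12, np.random.default_rng(2)
x = rng.normal(size=3)
x /= np.linalg.norm(x)                   # arbitrary point on S^2

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

orbit = np.array([rot_z(2 * np.pi * k / n) @ x for k in range(n)])

# all pairwise distances, then each point's sorted distance profile
d = np.linalg.norm(orbit[:, None, :] - orbit[None, :, :], axis=-1)
profiles = np.sort(d, axis=1)
assert np.allclose(profiles, profiles[0])   # strong equidistribution
```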
H: Why is this Markov chain aperiodic? I have this matrix: $$P=\begin{pmatrix} 0 & 1 \\ 0.3 & 0.7 \end{pmatrix}$$ This Markov chain is said to be aperiodic, and I don't understand why. The period $\delta$ is the gcd of the set of all diagonal elements, right? If $\delta>1$, $P$ is periodic; if $\delta=1$, it is aperiodic. But here it is not $\delta=1$, is it? Or do I have to transform the matrix into some particular form first? AI: Since my comment provided sufficient clarification: the period of a state is the gcd of the lengths of all paths that return to it — not of the diagonal entries themselves. Here state 2 has $P_{22}=0.7>0$, i.e. a return path of length 1, so its period is 1; and since the chain is irreducible, every state has period 1, so the chain is aperiodic. As for the long-run behaviour: when there's a stationary state, your system will evolve towards that state. In your case, the two left eigenvectors are $(−1,1)$ and $(3,10)$ with corresponding eigenvalues $−0.3$ and $1$. Every other state of the system can be decomposed into those two states. The first state exhibits oscillating behaviour, but it dies out as $0.3<1$. The other state is stationary. So whatever your initial state, you'll evolve towards that stationary state.
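A quick numerical confirmation of the spectral discussion (a sketch with NumPy; `eig` returns right eigenvectors, so the transpose is used to get the left spectrum of $P$):

```python
import numpy as np

P = np.array([[0.0, 1.0], [0.3, 0.7]])
eigvals, _ = np.linalg.eig(P.T)          # left spectrum of P
print(np.sort(eigvals))                  # [-0.3, 1.0]

# powers of P converge to the stationary distribution (3/13, 10/13)
print(np.linalg.matrix_power(P, 50)[0])  # approx [0.2308, 0.7692]
```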
H: Sum of distances from triangle vertices to interior point is less than perimeter? Let $M$ be a point in the interior of triangle $ABC$ in the plane. Prove $AM+BM+CM<AB+BC+CA$. The above question was posed to someone I know who is taking high-school Euclidean geometry. I'm not sure what theorems she can rely on in her proof (though they all follow from Euclid's axioms anyway), but I do know that she does not use trigonometry at all. She turned to me (a mathematician by training) for help; and I can't seem to prove it. So I turn to you all for a proof (using facts from high-school geometry only). One thing I can prove is that $\sup(AM,BM,CM)<\sup(AB,BC,CA)$. Indeed, say the longest of the interior segments is $\overline{AM}$. Drop an altitude (perpendicular) from $A$ to the point $D\in\overline{BC}$, and consider the side — $\overline{AB}$ or $\overline{AC}$ — such that $\overline{AM}$ lies between $\overline{AD}$ and that side. (If $\overline{AM}\subset\overline{AD}$, consider either $\overline{AB}$ or $\overline{AC}$.) Say it's $\overline{AB}$. Then examining right triangle $ADB$ shows easily that $AM<AB$. However, I can't seem to prove that each of the three sides can be used in turn for one of the interior segments in that proof — which would suffice for the problem above. Another idea I had was to prove that angle $AMB$ is strictly larger than angle $ACB$ (and likewise for the other two angles) and to use that to prove the claim. But I can't seem to do either: neither to prove the inequality of angle measures, nor, assuming that inequality, to prove the claim sought. Any help would be much appreciated — again, using high-school geometry only. I suspect there's a simple proof I'm not seeing. AI: The method is to prove $AC+BC>AM+BM$. Extend $AM$ beyond $M$ to a point $E$ with $ME=MB$, so $\angle MBE=\angle MEB$, and let the ray $AM$ meet $BC$ at $F$ (it does, because $M$ is inside $\triangle ABC$). If $E$ lies on segment $MF$, then $AF\ge AE=AM+ME$, and since $BC>CF$ we get $BC+AC> FC+AC>AF \ge AE=AM+BM$. If $E$ lies on the extension of $MF$, then $\angle MBE> \angle CBE$ and $\angle BEC>\angle MEB =\angle MBE > \angle CBE$, so in $\triangle CEB$ the side $BC>CE$, hence $BC+AC>AC+CE>AE=AM+BM$. Either way we've proven $AC+BC>AM+BM$. For the same reason, $BC+AB>AM+MC$ and $AB+AC>BM+MC$. Adding the three inequalities and halving gives $AC+BC+AB>AM+BM+CM$. Edit: a simpler way: $AC +CF> AF =AM +MF$ and $MF + BF > BM$, so $AC+CF+BF+MF > AM+BM+MF$, which gives $AC+BC> AM+BM$.
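The inequality can also be stress-tested numerically before hunting for the synthetic proof (a sketch; interior points are produced with random barycentric weights):

```python
import numpy as np

rng = np.random.default_rng(3)
dist = lambda P, Q: np.linalg.norm(P - Q)

for _ in range(10_000):
    A, B, C = rng.normal(size=(3, 2))        # random triangle
    w = rng.dirichlet([1.0, 1.0, 1.0])       # random interior point
    M = w[0] * A + w[1] * B + w[2] * C
    assert dist(A, M) + dist(B, M) + dist(C, M) < dist(A, B) + dist(B, C) + dist(C, A)
```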
H: Show that the $k$th forward difference of $x^n$ is divisible by $k!$ Define the forward difference operator $$\Delta f(x) = f(x+1) - f(x)$$ I believe that if $f(x)$ is a polynomial with integer coefficients, $\Delta^k f(x)$ is divisible by $k!$. By linearity it suffices to consider a single monomial $f(x) = x^n$. I've checked this for small values of $n$ and $k$, and believe that a simple proof should exist, but am unable to find it. In particular, brute force gives $$\Delta^k x^n = \sum_j x^{n-j} \left[ \binom{n}{j} \sum_i (-1)^{k-i} \binom{k}{i} i^j \right]$$ but the terms in brackets appear to have no closed form solution (see (20)-(25) of http://mathworld.wolfram.com/BinomialSums.html). Motivation: I have an unknown integer coefficient polynomial of degree $n$ sampled at $x = 0, 1, \ldots, n$, and want to prove that all intermediate results in the classical divided difference algorithm are integers. AI: By linearity, it suffices to prove this for the falling factorials $x(x - 1)\cdots(x - (n-1))$, since every monomial $x^n$ is an integer linear combination of these (the coefficients are Stirling numbers of the second kind). The falling factorial is just $n! {x \choose n}$. A basic property of the forward difference operator is that $\Delta {x \choose n} = {x \choose n-1}$, from which it follows that $$\Delta^k\, x(x - 1)\cdots(x - (n-1)) = n! {x \choose n-k} = k! {n \choose k}\, x(x - 1) \cdots(x - (n-k-1))$$ and the conclusion follows.
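For the motivating application — integrality of the divided-difference table for integer samples — the statement is easy to verify directly (a sketch):

```python
from math import factorial

def fwd_diff(vals):
    return [b - a for a, b in zip(vals, vals[1:])]

n = 7
vals = [x**n for x in range(2 * n)]      # samples of x^n at x = 0..2n-1
for k in range(1, n + 1):
    vals = fwd_diff(vals)                # now holds values of Delta^k x^n
    assert all(v % factorial(k) == 0 for v in vals)
print("Delta^k x^7 is divisible by k! for k = 1..7")
```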