H: A linear holomorphic function If $f$ is a holomorphic function on a simply connected open domain $\Omega$, and $f$ is a linear function on the boundary of $\Omega$, i.e. $f=az+b$ on $\partial\Omega$, can I say that $f$ is also a linear function on $\Omega$? AI: Since $f(z)=az+b$ on the boundary, we know $g(z):=f(z)-az-b=0$ on $\partial \Omega$. By the maximum modulus principle, applied to the holomorphic function $g$ (this uses that $\Omega$ is bounded and that $f$, hence $g$, extends continuously to $\overline\Omega$), $g(z)=0$ for all $z\in\Omega$, or equivalently, $f(z)=az+b$.
H: Does this logic have the downward Skolem-Löwenheim theorem? Let $\mathcal L_Q$ denote the logic obtained from adding the quantifier $\newcommand{\almost}{\forall^\infty}\almost$ to the usual first-order logic, where the semantic interpretation of $\almost x\varphi$ is "All but finitely many $x$ satisfy $\varphi$", or formally: $$M\models\almost x\varphi(x)\iff\Big|\{m\in M\mid M\not\models\varphi[m]\}\Big|\text{ is finite}.$$ It's not very hard to show that this logic is not compact$^*$, and does not satisfy the upward Skolem-Löwenheim theorem (e.g. the order $(\Bbb N,\leq)$ has a categorical axiomatization). But what about its downward counterpart? According to Lindström theorem either compactness fails, or the downward Skolem-Löwenheim theorem should fail. One fails, what about the other? (*) Please don't discuss the failure of the compactness theorem for $\mathcal L_Q$ here before June 17th, 2013. I gave that part as a homework assignment to my students - some of whom are reading this site. AI: I think the following idea should give a proof of the downward Löwenheim-Skolem theorem for this logic, but I haven't checked it carefully, so I apologize if it contains a stupid mistake. Suppose I have an uncountable structure $\mathfrak A$ for a countable language and I want a countable substructure that is elementary with respect to your logic $\mathcal L_Q$. For each $\mathcal L_Q$-formula $\phi(\vec x)$ where $\vec x$ represents a sequence of free variables, create a new predicate symbol $P_\phi$ with arity equal to the length of $\vec x$, and let $\mathfrak A^+$ be the expansion of $\mathfrak A$ to the enlarged language, obtained by interpreting $P_\phi$ as synonymous with $\phi$. Note that the enlarged language is still countable, so $\mathfrak A^+$ has a countable elementary substructure $\mathfrak B^+$, where "elementary" means in the usual sense of just first-order logic. Now it seems to me that the reduct of $\mathfrak B^+$ to the original language serves as an $\mathcal L_Q$-elementary substructure of the original $\mathfrak A$. The point is that any specific use of $\forall^\infty$ amounts to first-order information. More precisely, suppose $\phi(x)$ is $(\forall^\infty y)\,\psi(x,y)$. Then whenever $\phi$ holds in $\mathfrak A$ of a particular element $a$, the number of values for $y$ that don't satisfy $\psi(a,y)$ is a specific finite number, and it is expressible in first-order logic in $\mathfrak A^+$ that the number of values of $y$ violating $P_\psi(a,y)$ is this specific number. So that information remains true in $\mathfrak B^+$. Similarly, if $\phi(a)$ fails in $\mathfrak A$, then for every natural number $n$ it is true in $\mathfrak A^+$ that there are more than $n$ values of $y$ violating $P_\psi(a,y)$; this is, for each $n$, first-order information and therefore still true in $\mathfrak B^+$. These observations should yield an inductive proof that the interpretations in $\mathfrak B^+$ of all the new $P_\phi$ predicates agree with the corresponding $\mathcal L_Q$-formulas $\phi$, just as in $\mathfrak A^+$. And that should imply that $\mathfrak B$ is an $\mathcal L_Q$-elementary submodel of $\mathfrak A$.
H: Find a vector $y$ such that $g(x)=\langle x,y \rangle$ for all $x \in V$ $V$ is an inner product space and $g: V \rightarrow F$ is a linear transformation. Find a vector $y$ such that $g(x)=\langle x,y \rangle$ for all $x \in V$. $V=P_2(R)$ with $\langle f,h \rangle=\int_{0}^{1}f(t)h(t)dt$, $g(f)=f(0)+f'(1)$. I know, by formula, $y=\sum_{i=1}^{n} \overline{g(v_i)}v_i$ where $\beta=\{v_1,\dots,v_n\}$ is an orthonormal basis. In this problem, the basis $\beta=(1,x,x^2)$ is not orthonormal, so I have to use Gram-Schmidt and normalize it. I tried to calculate this but could not arrive at the answer $y=210x^2-204x+33$. Of course there can be other ways to solve this, but I want to solve it the standard way (using Gram-Schmidt). I need your help. How do I get the answer? AI: You can solve the problem directly, although you need to invert a $3 \times 3$ matrix: Let $e_k(x) = x^k$. If $f=\sum_{i=0}^2 f_i e_i$, then $g(f) = f_0+f_1+2 f_2$. If $y = \sum_{i=0}^2 y_i e_i$, then $\langle f,y \rangle = \sum_{i=0}^2 \sum_{j=0}^2 f_i y_j \int_0^1 x^{i+j} dx = \sum_{i=0}^2 \sum_{j=0}^2 f_i y_j \frac{1}{i+j+1}$. We want to find $y$ such that $g(e_k) = \sum_{j=0}^2 y_j \frac{1}{k+j+1}$ for $k=0,1,2$. Note that $g(e_0) = 1$, $g(e_1) = 1$ and $g(e_2) = 2$. If we let $A = \begin{bmatrix} 1 & \frac{1}{2} & \frac{1}{3} \\ \frac{1}{2} & \frac{1}{3} & \frac{1}{4} \\ \frac{1}{3} & \frac{1}{4} & \frac{1}{5} \end{bmatrix}$, and $b = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}$, then the problem can be written as $A \begin{bmatrix} y_0 \\ y_1 \\ y_2 \end{bmatrix} = b$, which has the solution $\begin{bmatrix} y_0 \\ y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} 33 \\ -204 \\210 \end{bmatrix} $. Hence $y = 33 e_0 -204 e_1 + 210 e_2$.
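A quick numerical check of this linear solve (my addition, not part of the original answer; assumes numpy is available):

```python
import numpy as np

# Gram matrix of the monomial basis {1, x, x^2} for <f,h> = int_0^1 f(t)h(t) dt:
# A[i][j] = int_0^1 x^(i+j) dx = 1/(i+j+1)
A = np.array([[1.0 / (i + j + 1) for j in range(3)] for i in range(3)])

# g(e_k) = e_k(0) + e_k'(1): g(1) = 1, g(x) = 1, g(x^2) = 2
b = np.array([1.0, 1.0, 2.0])

y = np.linalg.solve(A, b)
print(np.round(y))  # [  33. -204.  210.], i.e. y = 33 - 204x + 210x^2
```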
H: Does there exist a function $f : X \to Y$ such that $f \in Y$? Does there exist a function $f : X \to Y$ such that $f \in Y$? I think this is related to Russell's paradox, but I'm not exactly sure how. Added Later: As Brian points out, given any function $f : X \to Y_0$, we can just add $f$ to the codomain, that is $f : X \to Y_0\cup\{f\}$ is a function which satisfies my initial request. However, what if we try to find a surjective function $f : X \to Y$ such that $f \in Y$? AI: Let $V_\omega$ be the set of hereditarily finite sets (i.e. sets which are finite, and all their elements are finite, and so on). It is immediate from this definition that if $x\in V_\omega$ and $y\in x$ then $y\in V_\omega$, and if $x\subseteq V_\omega$ is finite, then $x\in V_\omega$. It's not hard to show that if $X\in V_\omega$ then $\mathcal P(X)\in V_\omega$, and if $X,Y\in V_\omega$ then $X\times Y\in V_\omega$. Take any set $X\in V_\omega$; then any function $f$ from $X$ into $V_\omega$ is actually a function into a finite subset of $V_\omega$, therefore there exists a finite $Y\in V_\omega$, such that $f\subseteq X\times Y$. Therefore $f\in V_\omega$. For the added question, suppose that $f\colon X\to Y$ and $f\in Y$. If there exists some $x\in X$ such that $\langle x,f\rangle\in f$, then this is a contradiction to the axiom of regularity, because $\langle x,f\rangle =\{\{x\},\{x,f\}\}$ by the Kuratowski definition of an ordered pair, and this means that $$f\in\{x,f\}\in\langle x,f\rangle\in f.$$ In particular $f$ cannot be surjective. Note that regularity is necessary for this proof, otherwise we can have a set $X=\{X\}$. Note that in this case: $$\langle X,X\rangle=\{\{X\},\{X,X\}\}=\{\{X\}\}=\{X\}=X.$$ Therefore $X\times X=X$, which is the identity function from $X$ onto itself.
H: What law of algebra of propositions is happening here? I'm preparing for a test tomorrow and going over some reading material, and I came across this problem that was worked out. So far I think I'm following each step of logic, but I've hit a wall with this part: (p $\land$ ($\lnot$(r $\land$ q))) $\lor$ (($\lnot$p $\lor$ (r $\land$ q))) is logically equivalent to (p $\lor$ $\lnot$p $\lor$ (r $\lor$ q)) $\land$ (($\lnot$(r $\land$ q)) $\lor$ $\lnot$p $\lor$ (r $\lor$ q)) I'm stumped as to what law is applied that allows you to go from the first and conclude the second. AI: If you set $s=r\wedge q$, then the first statement is written as $(p\wedge\neg s)\vee(\neg p\vee s)=(p\wedge\neg s)\vee\neg(p\wedge\neg s)=1$. This last equality is due to the fact that $x\vee\neg x=1$ for all $x$. If you also set $t=r\vee q$, the second statement can be written as $(p\vee\neg p\vee t)\wedge(\neg s\vee\neg p\vee t)=1$; this last equality is due to the fact that $p\vee\neg p=1$ and $1\vee x=1$. As for the law itself: the step that turns a disjunction whose first disjunct is a conjunction into a conjunction of two disjunctions is the distributive law, $(A\wedge B)\vee C=(A\vee C)\wedge(B\vee C)$; both expressions are tautologies, as shown above.
H: Concept about series tests I have five kinds of tests here: 1. Divergence test 2. Ratio test 3. Integral test 4. Comparison test 5. Alternating series test And a few questions. 1. Are tests 1, 2, 3, 4 only available for positive series, and is the alternating series test only for alternating series? 2. To show $\sum_{n=1}^{\infty}(-1)^n$ diverges, I can't use the alternating series test, right? It would just tell me the series doesn't converge. So I tried to use the divergence test, but it seems like the divergence test is not applicable to alternating series. AI: I assume that for (1) you mean the theorem that says that if the $n^\text{th}$ term does not approach 0 as $n \to \infty$ then the series diverges. This test does not require the terms to be positive, so you can apply it to show that the series $\sum_{n=1}^\infty (-1)^n$ diverges. The ratio test does not require the terms to be positive. You end up taking the absolute value in this test, so signs do not matter. The usual formulations of the integral test and comparison test only apply to series with positive terms. The alternating series test is only for alternating series, as the name suggests. It has a couple of other requirements also. The alternating series test never tells you that a series diverges. If the hypotheses are met, then the conclusion is that the series converges.
H: Computing integral of $2$-form on a torus I am looking at problem 16-2 of Lee's Smooth Manifolds, second edition. Problem 16-2: Let $\Bbb{T}^2 \subseteq \Bbb{R}^4$ be the two-torus defined as the set of points $(w,x,y,z)$ such that $w^2 + x^2 = y^2 + z^2 = 1$, with the product orientation determined by the standard orientation on $\Bbb{S}^1$. Compute $\int_{\Bbb{T}^2} \omega$ where $\omega$ is the following two-form on $\Bbb{R}^4$: $$\omega = xyz \hspace{1mm} dw \wedge dy.$$ Now if I want to evaluate such a two-form, do I need to care about the orientation? I am tempted to set up a map $F : \Bbb{R}^2 \to \Bbb{R}^4$ that sends $(\varphi, \theta)$ to $(\cos \varphi, \sin \varphi, \cos \theta, \sin \theta)$ and then take the integral of $\omega$ on the torus to be $$\int_{[0,2\pi]^2} F^\ast \omega .$$ Is my reasoning correct, or do I need to care about the product orientation? AI: To answer your question: yes, you have to pay attention to the orientation. First of all you have to observe that $F:[0,2\pi)^2\to\mathbb{T}^2\subset\mathbb{R}^4$ is an isometry (the way you defined it), and that it is orientation-preserving for the product orientation; because of that you can conclude as you wanted: $$\int_{\Bbb{T}^2}\omega=\int_{[0,2\pi)^2}F^\star\omega$$
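As a concrete sanity check (my addition, assuming sympy), one can compute the pullback explicitly: since $w$ depends only on $\varphi$ and $y$ only on $\theta$, we have $F^\ast\omega = xyz\,w'(\varphi)\,y'(\theta)\,d\varphi\wedge d\theta$:

```python
import sympy as sp

phi, theta = sp.symbols('phi theta')
w, x, y, z = sp.cos(phi), sp.sin(phi), sp.cos(theta), sp.sin(theta)

# F*omega = x*y*z * (dw/dphi) * (dy/dtheta) dphi ^ dtheta
integrand = x * y * z * sp.diff(w, phi) * sp.diff(y, theta)

result = sp.integrate(integrand, (phi, 0, 2 * sp.pi), (theta, 0, 2 * sp.pi))
print(result)  # 0: the integral of cos(theta)*sin(theta)**2 over a full period vanishes
```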
H: How would I graph this? How would I graph this: $t^2+3t=40$? I tried factoring $(t-5)(t+8)=0$ but I am not sure how to graph it because the function equals zero. I know how to do it if it is $y=t^2+3t-40$. I am probably overlooking the obvious, any help? Thanks AI: What you have is an equation: you can think of its graph as the points where the function $f(t) = t^2 + 3t$ intersects the constant function $g(t) = 40$. Alternatively, as you've factored it, we can put $f^*(t) = t^2 + 3t -40 = (t-5)(t+8)$ and $g^*(t) = 0$, and graph the points of intersection. These will be given by the zeros of the equation: at $t = 5$ and at $t = -8$. Note that if you graph the factored equation, then the two functions graphed will be altered, but the points of intersection will remain, because the solution to the equation will remain unchanged. NOTE: The only points that satisfy your equation are two points...points that happen to be the intersection of a parabola with a line: so indeed, do graph the parabola, and graph the line. But the key point here (excuse the pun) is that you need to highlight/identify the two points at which the two functions intersect.
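To actually see the picture described above, here is a minimal plotting sketch (my addition, assuming numpy and matplotlib):

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-10, 7, 400)
plt.plot(t, t**2 + 3*t, label='f(t) = t^2 + 3t')
plt.axhline(40, color='gray', label='g(t) = 40')
plt.scatter([5, -8], [40, 40], color='red', zorder=3)  # intersections at t = 5 and t = -8
plt.legend()
plt.show()
```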
H: How to prove this symmetric polynomial equation? I got a problem from a friend, which is to prove that $\sum_{i=1}^{n}\frac{x_{i}^{m}}{\prod_{j\neq i}(x_{i}-x_{j})}=0$ for $m < n-1$. I tried to multiply the left side of the equation by $\prod_{1\leq i<j\leq n}(x_{i}-x_{j})$, and get $\prod_{1\leq i<j\leq n}(x_{i}-x_{j})\cdot\sum_{i=1}^{n}\frac{x_{i}^{m}}{\prod_{j\neq i}(x_{i}-x_{j})}=\sum_{i=1}^{n}\left\{(-1)^{n-i}x_{i}^{m}\prod_{p\neq i,\,1\leq p<j\leq n}(x_{p}-x_{j})\right\}$ However, starting from here I cannot conclude that this always equals zero. I think what I am trying to do is to relate it to some sort of symmetric polynomial property. But I couldn't figure out how. Any hints on this? Thanks a lot. AI: You're basically talking about the kernel of the Vandermonde matrix; I just read about it at https://mathoverflow.net/questions/49255/how-to-determine-the-kernel-of-a-vandermonde-matrix, which may be helpful.
H: Singleton subset of a metric space I am currently working through chapter two of Principles of Mathematical Analysis (ed. 3) by Walter Rudin. My question comes from pages 30-31. I know that a metric space must satisfy the definition: Definition: A metric space is an ordered pair, $(X,d)$, where $X$ is a set (whose elements are called points), and $d$ is a metric. The metric must satisfy the following, given $x,y,z \in X$: (a): $d(x,y) \ge 0$ (b): $d(x,y) = 0$ iff $x=y$ (c): $d(x,y) =d(y,x)$ (d): $d(x,z) \le d(x,y) + d(y,z)$ Rudin then provides the example that Euclidean spaces $\mathbb{R}^k$ are also metric spaces. Following this, he specifies that any subset of a metric space is also a metric space. After proving this statement, I then decided to come up with examples, and as an exercise, try to challenge myself to come up with counter-examples. One attempt that I made was: $\{ \boldsymbol{x} \} \subset \mathbb{R}^k$ where $\boldsymbol{x}$ is a $k$-tuple. This provides the conjecture: Conjecture: Given any $\boldsymbol{x} \in \mathbb{R}^k$, the set $\{ \boldsymbol{x} \} \subset \mathbb{R}^k$ is a metric space. Proof: $d(x,x) = |x - x| = |0| = 0$. Therefore, the only possible metric is $0$ and hence all the definition's conditions are satisfied. I am uneasy about the proof I have constructed because in the definition, two (and once three) elements are drawn from the metric space. So in order for the above to hold, I essentially need to give this element several times. So the primary substance of my question is: in a metric space, can we let an element "do two jobs at once?" AI: You can always let an element do two jobs at once in this sense unless you’re checking a statement that explicitly rules out the possibility, and here you are not. Here, for instance, you have to check a requirement that $d(x,z)\le d(x,y)+d(y,z)$ whenever $x,y,z\in X$; there is no implication here that $x,y$, and $z$ must be distinct points. Indeed, the triangle inequality must hold whether they are distinct or not. The same goes for the other clauses of the definition.
H: Digits in a large power of two I am trying to find the answer to $2^{34359738368}$. As is to be expected, every calculator and computer program I have used has crashed. To be honest I don't even want to know the exact answer, I just want to know very roughly the number of digits the answer has. Is there a trick to doing something like this or is it something that should be put into the too-hard basket? Reason: I want to know how big of a number the binary for a 4 GB file is. AI: Are you familiar with logarithms? There is a way to get the exact number of digits in this expression using logarithms. Our number system is written in base 10, i.e., each integer $a$ with $k$ digits and decimal representation $d_{k-1}d_{k-2}\ldots d_{0}$ can be expressed as $10^{0}\cdot d_{0} + 10^{1}\cdot d_{1} + \cdots + 10^{k-1}\cdot d_{k-1}$. To get the number of digits, therefore, we need to find the highest power of $10$ appearing in this expression and add $1$ to it. We can easily see that this is given by taking the floor of the base-10 logarithm of $a$ and adding 1, i.e. #digits$(a) = \lfloor \log_{10} a \rfloor + 1$. Using this information, we can now find the number of digits contained in $2^{34359738368}$ by using the properties of logs. We have #digits$(2^{34359738368}) = \lfloor \log_{10} (2^{34359738368}) \rfloor + 1 = \lfloor 34359738368 \log_{10}(2) \rfloor + 1$
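Evaluating the last expression is a one-liner; a small sketch (my addition) in Python, using the fact that $34359738368 = 2^{35}$ is the number of bits in a 4 GiB file:

```python
import math

exponent = 34359738368  # = 2**35 bits (4 GiB = 4 * 2**30 bytes, times 8 bits per byte)

# digits(2**e) = floor(e * log10(2)) + 1
digits = math.floor(exponent * math.log10(2)) + 1
print(digits)  # 10343311892 decimal digits
```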
H: Simplify Boolean Algebra How do I simplify the following expression with Boolean algebra? Please show what you used to simplify so I can understand. $$ABC + AB'C' + ABC' + A'B'C'$$ AI: First I want to group the elements that are similar. This will allow me to start reducing the expression. $$ABC + AB'C' + ABC' + A'B'C'$$ $$ABC + ABC' + AB'C' + A'B'C'$$ $$[AB(C+C')] + [B'C'(A+A')] \quad\text{(group)}$$ $$AB + B'C' \quad\text{(since } \alpha+\alpha' = 1 \text{ and } \alpha \cdot 1 = \alpha\text{)}$$ Edit: For me Boolean algebra is superior, but if you ever get stuck on a problem try using a Karnaugh map: http://en.wikipedia.org/wiki/Karnaugh_map If you read the Wiki page you will see that with a Karnaugh map you can simplify this expression.
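One way to double-check such a simplification is to compare truth tables exhaustively; a small sketch (my addition):

```python
from itertools import product

def original(a, b, c):
    # ABC + AB'C' + ABC' + A'B'C'
    return (a and b and c) or (a and not b and not c) \
        or (a and b and not c) or (not a and not b and not c)

def simplified(a, b, c):
    # AB + B'C'
    return (a and b) or (not b and not c)

# All 8 assignments agree, confirming the reduction
assert all(original(a, b, c) == simplified(a, b, c)
           for a, b, c in product([False, True], repeat=3))
print("simplification verified")
```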
H: why I always thought of polynomials as functions As I've started studying polynomial rings on my own, I would like to verify/ask about the concepts/questions that occurred to me. I've noticed that over some rings the polynomials are of little/no interest as functions, and all we're concerned about is the components obtained using $x^n~(n\ge0)$ as a 'separator'. This is the reason why over $\mathbb Z_3$, even though $x^3$ and $x^5$ are identical as functions, they differ as polynomials. Consequently, the associated set in a ring of polynomials is the family of all sequences over the ring in which all but finitely many terms are zero. I wonder why I always thought of polynomials as functions earlier, when I studied basic polynomials (over $\mathbb R$ or $\mathbb C$), and why this special feature didn't occur to me then. I guess it's because of the behavior of polynomials over those fields: no two distinct polynomials over $\mathbb R$ or $\mathbb C$ can be functionally identical, since any nonzero polynomial over $\mathbb R$ or $\mathbb C$ has at most a finite number of roots. Am I right? AI: Yes, you are correct. It is worth noting that for many applications it is helpful to think of polynomials as both functions and formal objects. For example: Factor Theorem: Let $f(x)$ be a polynomial over a field $F$. Then $(x-c)\mid f(x)$ (which is a statement about polynomials as formal objects) iff $f(c)=0$ (which is a statement about polynomials as functions). Additionally, the (I believe) most common proof of the Fundamental Theorem of Algebra, which states that any nonconstant polynomial over $\mathbb C$ has a root in $\mathbb C$, uses in a crucial way the fact that real polynomials are continuous functions, in the form of this lemma: Lemma: Any polynomial $f(x)$ over $\mathbb R$ with odd degree has a real root. Proof: Since $f(x)$ has odd degree, the signs of $f(x)$ disagree for very negative $x$ and very positive $x$. Since polynomials are continuous, by the Intermediate Value Theorem we have some $c\in\mathbb R$ such that $f(c)=0$.
H: Proving $C([0,1])$ Is Not Complete Under $L_1$ Without A Counter Example I'd like to show that $C([0,1])$ (that is, the set of functions $\{f:[0,1]\rightarrow \mathbb{R} \mid f \textrm{ is continuous}\}$) is not a complete metric space under the $L_1$ distance function: $$ d(f,g) = \int_0^1 |f(x)-g(x)|dx $$ I can find counter examples (for example, here) but would rather prove it using definitions and principles so that I do not have to rely on committing specific degenerate sequences to memory. Since all compact metric spaces are complete, I have to figure that the place to start is to show that $C([0,1])$ is not compact and that somehow an infinite cover allows for a divergent Cauchy sequence. However, I don't know how to show this (or if it's even the right approach to take). AI: $C[0,1]$ can be embedded as a subspace of $L^1[0,1]$. It is dense in $L^1$, but not equal to $L^1$. Therefore it is not closed, and hence not complete (a subspace of a complete metric space is complete if and only if it is closed). Since all compact metric spaces are complete, I have to figure that the place to start is to show that $C([0,1])$ is not compact and that somehow an infinite cover allows for a divergent Cauchy sequence. It is certainly not compact, but the logic is off here. Compact metric spaces are complete, but complete metric spaces need not be compact. For example, think of $\mathbb R$ with its usual metric, or any other nonzero complete normed space.
H: Prove a relation related to sets In a city, between each pair of people there is exactly one of $k$ different relationships (relationships are symmetric). A crowd is a set of three people in which every pair has the same relation. Let $R_k$ denote the smallest number of people in the city such that the city always contains a crowd; prove that $R_k \leq [ek!] + 1$. Thanks in advance. Please help me put this under the proper tag. AI: The "ramsey-theory" tag would be appropriate. (The standard formulation for your problem is in terms of monochromatic triangles in an edge-coloring of a complete graph.) The usual notation for the greatest integer or floor function nowadays is $\lfloor x\rfloor$ not $[x]$. Here are your hints: For $k\in\Bbb N$, let $A_k=\lfloor k!e\rfloor=\frac{k!}{0!}+\frac{k!}{1!}+\dots+\frac{k!}{k!}$. Then $A_1=2$, and $A_k=kA_{k-1}+1$ for $k>1$. Use induction on $k$ to show that, if there are $k$ possible relationships, then any city with $A_k+1$ people must contain a crowd. (Consider one citizen, and use the Pigeonhole Principle to show that he is in the same relationship to at least $A_{k-1}+1$ of his fellow citizens; then apply the Induction Hypothesis.) Conclude that $R_k\le A_k+1=\lfloor k!e\rfloor+1$. By the way, the equality $R_k=\lfloor k!e\rfloor+1$ holds only for $k=1,2$ and $3$.
H: What does an apostrophe as a suffix denote? I was just curious as to what "$'$" denotes; i.e. $x' = y$, as in $x'(t) = x(t)$ which has the solution $x(t) = c_1\;e^t$. I've found out that it has something to do with differential equations, but I can't seem to find any information specifically on "$x'$". If someone could provide a source for such information, it would be much appreciated. AI: Typically, $x'$ refers to some derivative of $x$ with respect to a given variable. It is often used in contexts where the derivative being taken is clear, for ease of notation. In the equation you list, for example, $x'(t) = x(t)$ is the same thing as writing $\frac{d}{dt}x(t) = x(t)$. (This is often called prime notation, or Lagrange's notation for derivatives.)
H: Prove the following inequality Show that, for all integers $m > 1$, $$\frac {1}{2me} < \frac {1}{e} - \left(1-\frac{1}{m}\right)^m < \frac {1}{me}.$$ AI: Here's one part: We have $\left(1+\frac1{km}\right)^{km}\to e$ as $k\to\infty$, hence for $k$ big enough the error in $$ \left(1-\frac1m\right)^m\cdot e\approx\left(1-\frac1m\right)^m\left(1+\frac1{km}\right)^{km}= \left[\left(1-\frac1m\right)\left(1+\frac1{km}\right)^k\right]^m$$ becomes arbitrarily small. Using $(1+x)^n> 1+nx$ for $x>-1$, $x\ne0$, $n\ge2$ (Bernoulli) twice, we get $$ \left[\left(1-\frac1m\right)\left(1+\frac1{km}\right)^k\right]^m> \left[\left(1-\frac1m\right)\left(1+\frac1{m}\right)\right]^m=\left(1-\frac1{m^2}\right)^m>1-\frac1m.$$ Thus $\left(1-\frac1m\right)^m\cdot e>1-\frac1m$, i.e. $$ \frac1e-\left(1-\frac1m\right)^m<\frac1{em}.$$
H: $\lim_{n\to\infty} a_n=a$ if and only if $\forall p\in \Bbb N$, $\lim_{n\to\infty} |a_{n+p}-a_n|=0$ I'm doing exercises. In the related book, there is a claim. Is this right? I'm not sure. For a sequence $\{a_n\}$, there exists a limit $a$ such that $\lim_{n\to\infty} a_n=a$ if and only if for any $p\in \Bbb N$, $\lim_{n\to\infty} |a_{n+p}-a_n|=0$. If not, could you kindly give some counterexamples? Thanks. AI: Take $a_n = \displaystyle\sum_{k=1}^{n}\frac{1}{k}$: the partial sums of the harmonic series diverge, yet for every fixed $p$, $\displaystyle |a_{n+p}-a_n|=\sum_{k=n+1}^{n+p}\frac{1}{k}\le\frac{p}{n+1}\to 0$ as $n\to\infty$. So the "if" direction of the claim is false.
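A quick numerical illustration of this counterexample (my addition): the partial sums keep growing while the gap $a_{n+p}-a_n$ shrinks for any fixed $p$ (here $p=7$):

```python
def H(n):  # n-th partial sum of the harmonic series
    return sum(1.0 / k for k in range(1, n + 1))

p = 7
for n in (10, 10**3, 10**5):
    print(n, H(n), H(n + p) - H(n))  # H(n) grows without bound; the gap tends to 0
```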
H: Proving equivalences between prime counting functions. If we have that: $$\theta(x)=\sum_{p\leq x}\log p,$$ and $$\psi(x)=\sum_{n\leq x}\Lambda(n)$$ where $\Lambda(n)=\log p$ if $n=p^m$ and $\Lambda(n)=0$ otherwise, how can I prove that: 1) $\theta(x)=\psi(x)+O(\sqrt{x})$ 2) $\pi(x)=\frac{\psi(x)}{\log x}+O(\frac{x}{\log^2x})$ where $\pi(x)=\sum_{p\leq x}1$ is the function that counts the number of primes up to $x$? AI: Given any $n\in\mathbb{N}$ note every prime $p$ with $n< p\leq 2n$ divides $(2n)!$, but since $(2n)!=(n)!^2\binom{2n}{n}$ this means $p$ divides $(n)!^2\binom{2n}{n}$, thus by Euclid's lemma $p$ must divide $(n!)^2$ or $\binom{2n}{n}$. Yet if $p$ divides $(n!)^2$ then $p$ must divide $(n!)=1\times 2\times \cdots \times n$, which means $p$ must divide an integer $1\leq k\leq n$; however this is impossible because by assumption $n<p$, therefore $p$ cannot divide $(n!)^2$, meaning it must divide $\binom{2n}{n}$. This proves every prime $p$ in the interval $(n,2n]$ divides $\binom{2n}{n}$, or equivalently that the integer $\prod_{n<p\leq 2n}p$ divides $\binom{2n}{n}$, which means $\prod_{n<p\leq 2n}p\leq \binom{2n}{n}\leq \binom{2n}{n}+\sum_{\substack{0\leq k\leq 2n\\k\neq n}}\binom{2n}{k}=4^n$, thus we get $\prod_{n<p\leq 2n}p\leq 4^n\implies \log(\prod_{n<p\leq 2n}p)\leq n\log(4)\implies \sum_{n<p\leq 2n}\log(p)\leq n\log(4)$, which proves for every $n\in\mathbb{N}$ that: $$\vartheta(2n)-\vartheta(n)=\left(\sum_{p\leq 2n}\log(p)\right)-\left(\sum_{p\leq n}\log(p)\right)=\sum_{n<p\leq 2n}\log(p)\leq n\log(4)$$ Thus for any $m,k\in\mathbb{N}$ if we set $n=m/2^k$ then this proves $\vartheta\left(\frac{m}{2^{k-1}}\right)-\vartheta\left(\frac{m}{2^{k}}\right)\leq \frac{m}{2^{k}}\log(4)$, which means we must have that: $$\small\vartheta\left(m\right)=\left[\vartheta\left(m\right)-\vartheta\left(\frac{m}{2}\right)\right]+\left[\vartheta\left(\frac{m}{2}\right)-\vartheta\left(\frac{m}{4}\right)\right]+\left[\vartheta\left(\frac{m}{4}\right)-\vartheta\left(\frac{m}{8}\right)\right]+\cdots \leq \sum_{k=1}^{\infty}\frac{m}{2^k}\log(4)=m\log(4)$$ Thus for any non-negative $x\in\mathbb{R}$ setting $m=\lfloor x\rfloor$ gives $\vartheta\left(x\right)=\vartheta\left(\lfloor x\rfloor\right)\leq \lfloor x\rfloor\log(4)\leq x\log(4)$, which means $\vartheta\left(x\right)=\mathcal{O}(x)$; therefore because $\small\psi(x)=\sum_{n\leq x}\Lambda(n)=\sum_{p\leq x}\lfloor \log_p(x)\rfloor\log(p)$ we can now write $\psi(x)=\sum_{j=1}^{\lfloor\log_2(x)\rfloor}\sum_{p^j\leq x}\log(p)=\sum_{j=1}^{\lfloor\log_2(x)\rfloor}\vartheta(x^{1/j})=\vartheta(x)+\vartheta(x^{1/2})+\sum_{j=3}^{\lfloor\log_2(x)\rfloor}\vartheta(x^{1/j})$, which finally proves that $\psi(x)=\vartheta(x)+\vartheta(x^{1/2})+\mathcal{O}(x^{1/3}\log(x))\implies \psi(x)=\vartheta(x)+\mathcal{O}(x^{1/2})$. Now by partial summation we have that: $$\small \frac{\psi(x)}{\ln(x)}+\mathcal{O}\left(\frac{x}{\log(x)^2}\right)=\frac{\psi(x)}{\ln(x)}+\mathcal{O}\left(\int_{2}^x\frac{1}{\ln(t)^2} dt\right)=\frac{\psi(x)}{\ln(x)}+\int_{2}^x\frac{\psi(t)}{t\ln(t)^2} dt=\sum_{1<n\leq x}\frac{\Lambda(n)}{\ln(n)}\\\implies \small \frac{\psi(x)}{\ln(x)}+\mathcal{O}\left(\frac{x}{\log(x)^2}\right)=\sum_{j=1}^{\lfloor\log_2(x)\rfloor}\frac{\pi(x^{1/j})}{j}\implies \small \pi(x)=\frac{\psi(x)}{\ln(x)}-\sum_{j=2}^{\lfloor\log_2(x)\rfloor}\frac{\pi(x^{1/j})}{j}+\mathcal{O}\left(\frac{x}{\log(x)^2}\right)\\\implies \pi(x)=\frac{\psi(x)}{\ln(x)}+\mathcal{O}\left(\frac{x}{\log(x)^2}\right)$$ As required.
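These estimates are easy to watch numerically; a rough sketch (my addition, assuming sympy for prime generation):

```python
import math
from sympy import primerange

def theta(x):
    return sum(math.log(p) for p in primerange(2, int(x) + 1))

def psi(x):
    # psi(x) = sum_{j>=1} theta(x^(1/j)); only j <= log2(x) contribute
    return sum(theta(x ** (1.0 / j)) for j in range(1, int(math.log2(x)) + 1))

for x in (10**3, 10**5):
    print(x, psi(x) - theta(x), math.sqrt(x))  # the difference stays of order sqrt(x)
```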
H: kill rate of insecticide differential equations A field of wheat teeming with grasshoppers is dusted with an insecticide having a kill rate of 200 per 100 per hour. What percentage of the grasshoppers are still alive 1 hour later? I did not understand the units. What does 200 per 100 per hour mean? Hence I could not solve the problem. The answer is 13.53%. Please tell me how. AI: It means exponential decay, $P(t)=P(0)e^{-2t}$. Fairly standard terminology, though a little peculiar. The $200$ per $100$ is $200\%$, i.e. a relative decay rate of $2$ per hour, so after one hour $P(1)/P(0)=e^{-2}\approx 0.1353$, which gives the $13.53\%$.
H: What mathematical objects permit "taking of limits"? Background I have been reading a lot of abstract algebra recently (at the level of Artin/Dummit & Foote/Herstein Topics in Algebra for those of you familiar with these books). I have noticed that many of the abstract objects like groups and rings lack a certain property in general. Namely, we cannot "take limits" in these settings generally (at least, I don't know how to define such a thing). Question What is the property of fields like the reals that allows us to "take a limit" that fields like $\mathbb{Z}/7\mathbb{Z}$ lack? AI: Many categories are complete or cocomplete, which means that you can take limits or colimits of diagrams in your category. Every category of algebraic structures of a given type is complete and cocomplete. For example: The ring of $p$-adic integers $\mathbb{Z}_p$ is the limit of the rings $\mathbb{Z}/p^n$, where $n \geq 0$. The group $\mathbb{Q}/\mathbb{Z}$ is the colimit of the finite cyclic groups $\mathbb{Z}/n$, where $n>0$, ordered by divisibility. If $E/K$ is a Galois extension, then it is the colimit of the finite Galois extensions $E'/K$ where $E' \subseteq E$, and for the corresponding Galois groups this implies that $\mathrm{Gal}(E/K)$ is the limit of the finite groups $\mathrm{Gal}(E'/K)$, thus it is a profinite group. If $R$ is a ring, the colimit of the groups $\mathrm{GL}_n(R)$, where the transition maps are $A \mapsto \mathrm{diag}(A,1)$, equals $\mathrm{GL}(R)$, the group of infinite matrices which are the identity up to finitely many entries. This group is important in K-theory. One can show that limits in topological spaces are special cases of limits in the sense of category theory. See MO/9951. But often one also wants to take limits of sequences (or nets, or filters) in your favorite algebraic object. This is possible for topological algebraic structures. The most important examples are topological groups, topological rings, Banach algebras, and C*-algebras. For example, we have $p^n \to 0$ in $\mathbb{Z}_p$, and $x^n \to 0$ in $\mathbb{Z}[[x]]$. For more information, see topological algebra and the references given there. Every set can be equipped with the discrete topology, which means that a sequence converges iff it becomes eventually constant.
H: Expressing $x^{2}-y^{2}-y + i \cdot (2xy+y)$ in terms of $z$. How do I express the following expression $$x^{2}-y^{2}-y + i \cdot (2xy+y)$$ in terms of $z$? I know $x^{2}-y^{2} + 2xyi = (x+iy)^{2}=z^{2}$, but what about $iy -y$? AI: $$ x^2-y^2+i(2xy) +iy-y= z^2 + (i-1)\Im(z) =z^2 + \frac1{2i}(i-1)(z-z^*) = z^2+\frac{1+i}{2}(z-z^*) $$
H: Is my row calculation of row echelon form correct? I was directed by a community member to a resource on how to calculate the row echelon form of a matrix here. The resource says: First we wish to put A into reduced row echelon form. There are several ways to do this (and thus several matrices P), but here is one possible way: (calculation next) So, I understand that a given matrix can have multiple row echelon forms. To continue my self-study of linear algebra further, I looked at the example on wikihow. It gives a simple 3x3 matrix and shows how to calculate the row echelon form. Fair enough. As in wikihow, the given matrix is: $$ \begin{bmatrix} 1 & 1 & 2 \\ 1 & 2 & 3 \\ 3 & 4 & 5 \end{bmatrix} $$ and the row echelon form is this: $$ \begin{bmatrix} 1 & 1 & 2 \\ 0 & 1 & 1 \\ 0 & 0 & -2 \\ \end{bmatrix} $$ However, my answer is different and I am not sure if it is correct. My calculation on the same matrix: Attempting to get all zeros under $A_{11}$ as: $R_2 - R_1 \to R_2$, $(3\times R_1) - R_3 \to R_3$ So, the matrix is: $$ \begin{bmatrix} 1 & 1 & 2 \\ 0 & 1 & 1 \\ 0 & -1 & 1 \\ \end{bmatrix} $$ Attempting to get all zeroes under $A_{22}$ as: $R_2 + R_3 \to R_3$ So the matrix is: $$ \begin{bmatrix} 1 & 1 & 2 \\ 0 & 1 & 1 \\ 0 & 0 & 2 \\ \end{bmatrix} $$ The only difference is that I have a $2$ in the bottom right and wikihow has $-2$. Is it correct? AI: You're getting different answers because you're subtracting $R_3$ from $3 R_1$. Usually, you subtract $3 R_1$ from $R_3$ (you can add or subtract rows, but you're changing the sign on $R_3$ here: we often choose to do it this way because adding or subtracting a multiple of a row keeps the determinant the same). So, to get Wikihow's answer, you should have taken $R_3 - 3R_1 \to R_3$ instead. This also explains the sign flip, since $R_3 - 3R_1 = - ( 3R_1 - R_3 )$.
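Both variants can be replayed step by step (my addition, assuming sympy); the only difference is the sign flip on $R_3$ that the answer points out:

```python
import sympy as sp

A = sp.Matrix([[1, 1, 2], [1, 2, 3], [3, 4, 5]])

M = A.copy()                      # wikihow's operations
M[1, :] = M[1, :] - M[0, :]       # R2 <- R2 - R1
M[2, :] = M[2, :] - 3 * M[0, :]   # R3 <- R3 - 3*R1  (note the order)
M[2, :] = M[2, :] - M[1, :]       # R3 <- R3 - R2
print(M)  # Matrix([[1, 1, 2], [0, 1, 1], [0, 0, -2]])

N = A.copy()                      # the question's variant, with R3 negated
N[1, :] = N[1, :] - N[0, :]       # R2 <- R2 - R1
N[2, :] = 3 * N[0, :] - N[2, :]   # R3 <- 3*R1 - R3
N[2, :] = N[1, :] + N[2, :]       # R3 <- R2 + R3
print(N)  # Matrix([[1, 1, 2], [0, 1, 1], [0, 0, 2]])
```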
H: Proving that $\mathbb{Z}[i]$ is a noetherian ring Claim: the ring $\mathbb{Z}[i]$ is a noetherian ring. My proof: 1) $\mathbb{Z}[i]$ is a finitely generated $\mathbb{Z}$-module. 2) $\mathbb{Z}$ is a noetherian ring. 3) Every finitely generated module over a noetherian ring is a noetherian module, hence $\mathbb{Z}[i]$ is a noetherian $\mathbb{Z}$-module. 4) By definition of noetherian module, every $\mathbb{Z}$-submodule of $\mathbb{Z}[i]$ is finitely generated as a $\mathbb{Z}$-module. 5) An ideal $\mathfrak{i}$ of $\mathbb{Z}[i]$ is in particular a $\mathbb{Z}$-submodule of $\mathbb{Z}[i]$. 6) $\mathfrak{i}=\mathbb{Z}x_1+\ldots +\mathbb{Z}x_n$. 7) Since $\mathfrak{i}$ is finitely generated as a $\mathbb{Z}$-module, it is also finitely generated as an ideal. Do you think my proof works? AI: Would you like another proof? By Hilbert's Basis Theorem $\mathbb{Z}[X]$ is noetherian. Hence $\mathbb{Z}[i]$ is also noetherian, being a factor ring of $\mathbb{Z}[X]$: indeed $\mathbb{Z}[i]\cong\mathbb{Z}[X]/(X^2+1)$, and a quotient of a noetherian ring is noetherian. Addendum: In fact, here (for one unknown) Hilbert's theorem is not needed.
H: A smooth function instead of a piecewise function I want to find a smooth function approximating $f(x)$ as well as possible: \begin{equation*} f(x) = \begin{cases} x & \text{if } x \le a,\\ a & \text{if } x > a. \end{cases} \end{equation*} ($a$ is a positive constant and $x$ is a positive real number). $f(x)=\sqrt[n]{x}$ has a similar trend, but is not good enough. What is the best alternative function for the piecewise one? AI: A nice class of functions which "approximate" your continuous example is given by the sigmoids: http://en.wikipedia.org/wiki/Sigmoid_function Unfortunately your "best alternative function" concept is not well defined, as pointed out by Hagen von Eitzen. I would try (if this is your exact problem!) at least to work with a fixed "error" $a-g(a)$ at the point $a$, while searching for a smooth candidate $g$ approximating your function $f$.
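For one concrete smooth candidate (my addition, not from the answer): the "soft minimum" $g_k(x)=-\tfrac1k\log(e^{-kx}+e^{-ka})$ is smooth, tends to $\min(x,a)=f(x)$ as $k\to\infty$, and its derivative is exactly the sigmoid $1/(1+e^{k(x-a)})$, which ties it to the sigmoid suggestion above:

```python
import numpy as np

def soft_min(x, a, k=20.0):
    """Smooth approximation of f(x) = min(x, a); sharper as k grows."""
    m = np.maximum(-k * x, -k * a)  # shift by the max for numerical stability
    return -(m + np.log(np.exp(-k * x - m) + np.exp(-k * a - m))) / k

x = np.linspace(0.0, 4.0, 9)
print(soft_min(x, a=2.0))  # close to min(x, 2) away from the corner at x = 2
```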
H: What does the letter U mean in math? What does the letter U mean in the following expression: $$ \bigcup_{\alpha} A_\alpha \;? $$ It doesn't look like logical OR. AI: It represents the union of all the sets $A_\alpha$. By the "union", what it basically means is that: $$ a \in \bigcup_{\alpha} A_\alpha \iff \exists \alpha :a \in A_\alpha $$ or in words - it is the set of elements which are contained in at least one of the $A_\alpha$.
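The same idea in code, if that helps (my addition): an indexed family of sets and the set of elements belonging to at least one member:

```python
# An indexed family {A_alpha} and its union over all indices alpha
family = {'alpha1': {1, 2}, 'alpha2': {2, 3}, 'alpha3': {5}}
union = set().union(*family.values())
print(union)  # {1, 2, 3, 5}
```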
H: Maximum likelihood estimation - why is $\mathcal{L}$ not the joint pdf? Here's an excerpt from my notes: Define the likelihood function: $$\mathcal{L}(\vec{x};\theta)=\prod_{i=1}^{n} f(x_i;\theta)$$ where $f$ is the pdf of the distribution we're sampling the $x$'s from. Caution: the likelihood function $\mathcal{L}(\vec{x};\theta)$ is not the same as the joint pdf $L(\vec{x};\theta)$. Could someone explain to me where this "caution" comes from? Is it purely because the samples were not assumed independent? AI: The likelihood function $\mathcal L$ is not the same as the joint pdf $L$ because they are functions of different variables. The likelihood function $\mathcal L$ is a function of the parameter $\theta$ for a fixed value of the observations $\vec{x}$ but $L$ is a function of the observations $\vec{x}$ for a fixed value of the parameter $\theta$. This is why I prefer to write $\mathcal{L}(\theta;\vec{x})$ instead of $\mathcal{L}(\vec{x};\theta)$. The general definition of the likelihood function when $\vec{x}$ admits a joint pdf is actually in terms of the joint pdf: $$ \mathcal{L}(\theta;\vec{x}):= L(\vec{x};\theta) $$ but in the case of independence this simplifies to $\prod\limits_{i=1}^n f(x_i;\theta)$.
H: Base change and ordinals Problem. Define the operation base change from $k$ to $m$: to apply the operation to a natural number $n$, write $n$ in the base-$k$ numeral system and read this in the base-$m$ numeral system. Let $n$ be a natural number. Apply to $n$ the base change from $2$ to $3$, then subtract $1$, then change base from $3$ to $4$, then subtract $1$, and so on. Prove that we'll get zero after finitely many steps. For example, for $n=4$ the sequence is $$9,8,10,9,11,10,12,11,12,11,12,11,12,11,12,11,12,11,\\ 12,11,11,10,10,9,9,8,8,7,7,6,6,5,5,4,4,3,3,2,2,1,1,0.$$ For $n=6$ the sequence contains 762 terms. The problem has elementary proofs, but it's from a book on Set Theory, and the authors say that it has a solution using ordinals: change all bases to $\omega$ and get a decreasing sequence of ordinals. I can't understand this hint. AI: Suppose at the base $n$ stage you have the representation $a_ka_{k-1}\dots a_1a_0$, representing $$g_n=a_kn^k+a_{k-1}n^{k-1}+\ldots+a_1n+a_0\;,$$ where $a_0,\dots,a_k\in\{0,\dots,n-1\}$, and $a_k\ne 0$. Let $$\hat g_n=\omega^k\cdot a_k+\omega^{k-1}\cdot a_{k-1}+\ldots+\omega\cdot a_1+a_0\;,$$ where the arithmetic is all ordinal arithmetic. The change of base from $n$ to $n+1$ changes the interpretation of $a_ka_{k-1}\dots a_1a_0$ to $$a_k(n+1)^k+a_{k-1}(n+1)^{k-1}+\ldots+a_1(n+1)+a_0\;.\tag{1}$$ If $a_0\ne 0$, subtracting $1$ yields $$g_{n+1}=a_k(n+1)^k+a_{k-1}(n+1)^{k-1}+\ldots+a_1(n+1)+(a_0-1)\;,$$ and $$\hat g_{n+1}=\omega^k\cdot a_k+\omega^{k-1}\cdot a_{k-1}+\ldots+\omega\cdot a_1+(a_0-1)<\hat g_n\;.$$ In general suppose that $a_i$ is the rightmost non-zero coefficient/digit. Then subtracting $1$ from $(1)$ yields $$g_{n+1}=a_k(n+1)^k+\ldots+(a_i-1)(n+1)^i+\underbrace{n(n+1)^{i-1}+\ldots+n(n+1)+n}_{\text{all coefficients }=n}\;,$$ and $$\hat g_{n+1}=\omega^k\cdot a_k+\omega^{k-1}\cdot a_{k-1}+\ldots+\omega^i\cdot(a_i-1)+\underbrace{\omega^{i-1}\cdot n+\ldots+\omega\cdot n+n}_{\text{all coefficients }=n}\;.$$ This is again less than $\hat g_n$, since $$\omega^i\cdot a_i>\omega^i\cdot(a_i-1)+\underbrace{\omega^{i-1}\cdot n+\ldots+\omega\cdot n+n}_{\text{all coefficients }=n}\;:$$ this follows from the (possibly more evident) fact that $$\omega^i>\underbrace{\omega^{i-1}\cdot n+\ldots+\omega\cdot n+n}_{\text{all coefficients }=n}\;,$$ since $\omega^i\cdot a_i=\omega^i\cdot(a_i-1)+\omega^i$. Thus, the sequence of ordinals $\hat g_n$ is strictly decreasing and must terminate at $0$ in finitely many steps. But $\hat g_n=0$ if and only if $g_n=0$, so the same is true of the sequence of numbers $g_n$.
H: Proof that $\sum\limits_{n=1}^{\infty} z^{1/n}$ does not converge I believe I found a proof of the divergence of this sum for any value of $z$ besides 0. We can look at the telescoping series: $$\sum_{n=1}^{\infty}z^{1/(n+1)}-z^{1/n} = \lim_{N\rightarrow \infty} \left(z^{1/(N+1)}-z\right) = 1-z$$ If the sum in the title were convergent, say equal to $S$, then we should have: $$ (S-z) - S = 1-z \Rightarrow 1=0$$ So from this contradiction it follows that the series is divergent. Is there anything meaningful to say about this series? Perhaps some asymptotic analysis? Thanks in advance. AI: The reasoning in your post is based on (and partially rediscovering) the fact that if $x_n\to x\ne0$ then the series $\sum\limits_nx_n$ diverges. A more direct approach to this result is to note that if $\sum\limits_nx_n$ converges then $\sum\limits_{n\leqslant N}x_n$ and $\sum\limits_{n\leqslant N-1}x_n$ both converge to the same limit $\ell$, hence their difference $x_N$ converges to $\ell-\ell=0$. In your context, $x_n=z^{1/n}$ hence $x_n\to x=1\ne0$ for every $z\ne0$, hence the series $\sum\limits_nz^{1/n}$ diverges. Asymptotics of $x_n$ when $n\to\infty$ are $x_n=1+\frac1n\log z+O\left(\frac1{n^2}\right)$ hence $$ \sum\limits_{1\leqslant n\leqslant N}z^{1/n}=N+\log N\cdot\log z+O(1). $$ Even more precisely, the sequence of general term $\left(\sum\limits_{1\leqslant n\leqslant N}z^{1/n}\right)-N-\log N\cdot\log z$ converges to a finite limit.
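The final asymptotic claim can be watched numerically (my addition): for fixed $z$, the quantity $\sum_{n\le N} z^{1/n}-N-\log N\cdot\log z$ settles down to a constant:

```python
import math

z = 2.0
for N in (10**2, 10**4, 10**6):
    s = sum(z ** (1.0 / n) for n in range(1, N + 1))
    print(N, s - N - math.log(N) * math.log(z))  # approaches a finite limit
```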
H: How many 8-character passwords are possible using one of each of three types of character possible? How many eight-character passwords are there if each character is either an uppercase letter A-Z, a lowercase letter a-z, or a digit 0-9, and where at least one character of each of the three types is used? I believe this question is about using the technique of 'counting the complement'. The complement of "one character of each" is "either uppercase, lowercase, or digits is NOT used". Total possible passwords: $62^8$ Total possible passwords without uppercase letters: $36^8$ Total possible passwords without lowercase letters: $36^8$ Total possible passwords without digits: $52^8$ I think my trouble is from fully understanding what the 'complement' means. From what I understand it is (very loose explanation) the opposite of the restriction placed on our original set. So in our case the original set is $62^8$ and the complement of the restriction placed on that set, one of each character, is all the possibilities where types of characters are not there, which is what I typed above this paragraph. However, I feel like I am missing something. Otherwise I would say that the answer is: $62^8 - [36^8 + 36^8 + 52^8]$. Main problem: Not understanding how to compute the complement. AI: As Matt commented, we are using the principle of inclusion-exclusion. Note that your expression so far: $$62^8 - [36^8 + 36^8 + 52^8]$$ is actually too small; you oversubtracted. This is because certain passwords were double counted. For example, consider the illegal password: $ABCDEFGH$. Certainly, this is a possible password (it belongs to the set of size $62^8$). However, it contains no lowercase letters (it belongs to one of the sets of size $36^8$), so it is subtracted. However, notice that it contains no digits (it belongs to the set of size $52^8$), so it is subtracted a second time. This is bad! This illegal password is being counted a total of $1-2=-1$ times! To compensate, we must add some back in. Note that: Total possible passwords without uppercase AND without digits: $26^8$ Total possible passwords without lowercase AND without digits: $26^8$ Total possible passwords without uppercase AND without lowercase: $10^8$ Returning to our example illegal password $ABCDEFGH$, notice that it contains no lowercase AND no digits (it belongs to one of the sets of size $26^8$), so it is added back in once. This is good! This illegal password is now counted a total of $1-2+1=0$ times in our final answer. Hence, our total should actually be: $$62^8 - [36^8 + 36^8 + 52^8] + [26^8 + 26^8 + 10^8] = 159655911367680$$ EDIT: As Goos mentioned, we would normally continue this pattern by subtracting out the passwords without uppercase AND without lowercase AND without digits. In this case, however, this set has size $0^8$, so this doesn't affect the answer.
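The whole inclusion-exclusion count fits in a few lines (my addition):

```python
total = 62**8        # any of A-Z, a-z, 0-9 in each position
no_upper = 36**8     # lowercase + digits only
no_lower = 36**8     # uppercase + digits only
no_digit = 52**8     # letters only
only_lower = 26**8   # no uppercase AND no digits
only_upper = 26**8   # no lowercase AND no digits
only_digit = 10**8   # no uppercase AND no lowercase

valid = total - (no_upper + no_lower + no_digit) \
        + (only_lower + only_upper + only_digit)
print(valid)  # 159655911367680
```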
H: Looking for a way to find the proportional growth rate in time for any given notation I am wondering if there is a straightforward way to illustrate the proportional growth rate in time (or space) for any given notation such as $O(n^2)$ or $O(\log n)$? My initial thought is that $O(n^2)$ would be equal to $O(n)\cdot O(n)$ but I'm sure that's completely wrong. AI: We say that $f(n)=O(g(n))$ iff: There exist constants $c>0$ and $n_0$ such that for all $n\ge n_0$, $0 \le f(n) \le c \cdot g(n)$. Also, recall that $y$ is proportional to $x$ simply if $y=kx$ for some constant $k$. Thus, suppose some algorithm had a running time of $f(n)=O(n^3)$. Then we can conclude that this algorithm's (time or space) growth rate is proportional to $n^3$.
H: Calculating rotation matrix of coordinate system from 2 known axes In the image my main coordinate system is in the upper right corner. I measured $3$ points on a board and created a helper coordinate system. V1 points directly to the origin of the helper coordinate system. V2 and V3 lie on different axes of the helper coordinate system. I want to find the transformation matrix. So V1 is my translation, right? Subtracting V1 from V2 and V3, I can move the origin of the helper coordinate system to the origin of the main coordinate system, right? But how do I calculate the rotation? AI: You can construct a set of three orthogonal vectors from the following: $$u_1 = v_2 - v_1$$ $$u_3 = u_1 \times (v_3 - v_1)$$ $$u_2 = u_3 \times u_1$$ Once you normalize all three of these vectors you can construct the rotation matrix as follows: $$\mathbf R = [u_1\ u_2 \ u_3]$$ This is what I have referred to as a triad, although I am not sure how common that terminology is.
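A small sketch of this triad construction (my addition, assuming numpy; the function name is mine):

```python
import numpy as np

def triad_rotation(v1, v2, v3):
    """Orthonormal frame (as columns of R) built from three measured points."""
    u1 = v2 - v1
    u3 = np.cross(u1, v3 - v1)
    u2 = np.cross(u3, u1)
    u1, u2, u3 = (u / np.linalg.norm(u) for u in (u1, u2, u3))
    return np.column_stack((u1, u2, u3))

R = triad_rotation(np.zeros(3), np.array([1., 0., 0.]), np.array([0., 1., 0.]))
print(np.allclose(R @ R.T, np.eye(3)))  # True: R is orthogonal
```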
H: Find a function that gives this sequence: $+1,+1,-1,+1,+1,-1,-1,+1,+1,+1,-1,-1,+1,-1,-1,...$ I start with a string $S_1=1$; then the $(n+1)$-th string is $S_{n+1}=\{S_n, +1, -(S_n)\}$, where if $S_j=\{s_1,s_2,s_3,..., s_i\}$ then $-(S_j)$ is defined as $-(S_j)=\{-(s_i), -(s_{i-1}),..., -(s_3), -(s_2), -(s_1) \}$. The sequence is $[(+1_1,+1_2,-1_3),+1_4,(+1_5,-1_6,-1_7)],+1_8,[(+1_9,+1_{10},-1_{11}),-1_{12},(+1_{13},-1_{14},-1_{15})],...$ I can't find it on the internet. The properties that I noticed are these: if $\beta(n)=\beta_n$ is the $n$-th number of the sequence we have that $\beta(2^n -1)=\beta(M_n)=-1$ where $M_n$ is the $n$-th Mersenne number, $\beta(2^n)=+1$, $\beta(2^n +1)=+1$, if $2^n \lt j \lt 2^{n+1}$ then $\beta_j=-\beta(2^{n+1}-j)=-\beta(2^n+2^{n+1}-j)$, and $\forall m\in \Bbb N$ $\beta(2^m+n2^{m+1})= \begin{cases} +1, & \text{if $n$ is even} \\ -1, & \text{if $n$ is odd } \\ \end{cases}$ Questions: 1 - Is there a general formula? Does it have a name? 2 - Is this sequence (and the formula) useful in any fields of mathematics? Edit: I made a mistake writing the sequence; it is now fixed, and I added how I build the sequence. AI: I assume the sequence is $a_n$ with first six terms beginning with $a_1$ being $1,1,-1,1,-1,-1$, and also $a_{n+6}=a_n.$ Note that the nonzero squares mod $7$ are $1,2,4$, and that this is precisely where in the first six terms we have $a_n=1$. Based on that we can come up with a formula for $a_n$ in two stages. First, define $$f(n) = mod(\ [\ mod(n-1,6)+1\ ]^3, \ 7),$$ and note that the sequence $f(1),f(2),...$ is $1,1,6,1,6,6,1,1,6,1,6,6,...$ and repeats with period 6. (The inner mod 6 serves to place $n$ into one of the positions 1 through 6, and the outer mod of the cube mod 7 is from Euler's criterion, that the quadratic character of $a$ mod $p$ is congruent to $a^{(p-1)/2}$ mod $p$.) Since I'm assuming the mod function outputs the least nonnegative residue, the sequence of $f(n)$ does what is required. In order to get rid of the 6's and convert them to $-1$, we may finally define $$a_n=(-1)^{1+f(n)}.$$ ADDED: The OP has now given a different definition of the desired sequence, so the above doesn't match it now. I'll try for a formula for that... NOTE: Yet another adjustment has been made to the definition. Here's what I think is happening with the new sequence. The condition that $S_{n+1}=S_n,1,-(S_n)$ where (this is the latest adjustment) $(-S_n)$ is obtained from $S_n$ by reversing its order and changing the signs, may be reformulated by saying that $a(2^n)=1$ (this is where the central 1's wind up), and that, for $1 \le k \le 2^n-1$ we have $a(2^n+k)=-a(2^n-k)$. That may lead to a formula... EUREKA The sequence is $(-1/n)$ and is the Jacobi or Kronecker symbol. It is in OEIS as sequence number A034947, and at that page is the same method of generation you give, along with the fact that it is a multiplicative function and other information. For example $a(2n)=a(n)$ and $a(4n+1)=1,\ a(4n+3)=-1$. I think the page more than covers formulas for the $n$th term etc. Computational note: The computation of $a(n)$ can be done in two steps. First express $n=2^k \cdot u$ where $u$ is odd. (This expression is unique.) Then $$a(n)=(-1)^{(u-1)/2}.$$ To put this last another way, if $u\equiv 1 \pmod 4$ return $+1$, else return $-1$. Just for fun: Let $lg(x)=\ln(x)/\ln(2)$, the log base 2. Then let $g(x)=\lceil lg(x)\rceil,$ and define $$r(n)=\frac{n}{\gcd(2^{g(n)},n)}.$$ Then $r(n)$ is the $r$ used in the computation of $a(n)=(-1)^{(r-1)/2}.$
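The closed form is easy to test against the recursive construction (my addition): write $n=2^k u$ with $u$ odd and return $+1$ iff $u\equiv 1\pmod 4$:

```python
def a(n):
    """n-th term: the Kronecker symbol (-1/n), OEIS A034947."""
    while n % 2 == 0:  # strip the even part, leaving the odd u
        n //= 2
    return 1 if n % 4 == 1 else -1

S = [1]
for _ in range(3):  # S_{n+1} = S_n, +1, (reverse of S_n with signs flipped)
    S = S + [1] + [-s for s in reversed(S)]

print(S == [a(n) for n in range(1, len(S) + 1)])  # True for all 15 terms
```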
H: For what values of $a$ does the equation $(ax)^2-x^4=e^{|x|}$ have no solution? I am trying to find for what values of $a$ this equation has no solution. The condition is $|a|<\sqrt{2}$, and the equation is: $$(ax)^2-x^4=e^{|x|}$$ What I did so far is take logarithms in this equation: $$2\ln(ax)-4\ln(x)=|x|$$ I would like to get some advice. Thanks AI: It is enough to consider nonnegative $x$, so the equation reads $$ a^2x^2-x^4=e^x,\qquad x\ge0.$$ The left hand side is $=\frac14a^4-(x^2-\frac12a^2)^2\le\frac14a^4$, the right hand side is $\ge e^0=1$, hence if $|a|<\sqrt 2$, then $\frac14a^4<1$ and the equation has no solution, so we are done.
H: Symmetric matrix and inner product: $\langle Ah,x\rangle = \langle h,A^T x\rangle =\langle Ax,h\rangle$ If $A$ is a real, symmetric, regular, positive definite matrix in $\mathbb R^{n\times n}$ and $x,h\in \mathbb R^n$, why is it that $\langle Ah,x\rangle = \langle h,A^T x\rangle =\langle Ax,h\rangle$? Is there some rule or theorem for this? AI: Note that the inner product can be written as $\langle x,y\rangle=y^Tx$. So $\langle Ah, x\rangle=x^T Ah$ and $\langle h, A^T x\rangle=(A^Tx)^Th=(x^TA)h=\langle Ah,x\rangle.$ Also $A^T=A$ as $A$ is symmetric, and this gives the last equality.
H: Source coding theorem - optimum number of bits? The source coding theorem says that information transfer with a variable-length code uses fewer bits on average, the optimal average number of bits per symbol being the entropy of the distribution. It also says that there is no code that uses a smaller number of bits (or does it? Have I misunderstood this?). My question is: there is obviously a way to transmit the same information using fewer bits than the entropy. For example: Let $P(x) = \frac{1}{2}, \frac{1}{4}, \frac{1}{16}, \frac{1}{32}, \frac{1}{64},\frac{1}{64}$ respectively for $x = 1,2,3,4,5,6$. If, following the source coding theorem, a variable-length code with lengths $-\log_2(p(x))$ is used, then the average number of bits used is $1\cdot\frac{1}{2} + 2\cdot\frac{1}{4} + 4\cdot\frac{1}{16} + 5\cdot\frac{1}{32} + 6\cdot\frac{1}{64} + 6\cdot\frac{1}{64} = 1.5938$. But obviously, there is a better way of doing this: just use the following codes: 0, 1, 10, 11, 100, 101. This definitely uses fewer bits on average (bits $\lt 1.5938$). So what is the actual meaning of the source coding theorem? AI: The source coding theorem restricts to uniquely decodable codes, which in practice are the only useful codes. To see that your code is not uniquely decodable, and hence practically useless, imagine that you receive the sequence $10$: you cannot know whether it corresponds to the concatenation $1|0$ (your second symbol followed by your first) or to the single codeword $10$ (your third symbol). Usually, one restricts the set of acceptable codes even more, to prefix codes. It can be shown that doing so one loses nothing (in terms of average length), and prefix codes are more practical to decode and easier to analyze.
H: What's the term for a value x that satisfies the constraint $f(x) = f$ for a function f? I know that $x$ is called the fixed point of a function $f$ if it satisfies the constraint $f(x) = x$. However, for a function $f$, if there exists some value $x$ such that $f(x) = f$, then what is the term for the value $x$ with respect to $f$? Consider the following function in JavaScript: var bind = Function.prototype.bind; var bindable = bind.bind(bind); Now bindable is the function $f$ that I'm talking about. It satisfies the constraint $f(x) = f$ for the value bind: bindable(bind) = bindable; I know that I shouldn't express mathematics in terms of programming but I didn't know any other way to put it. Consider that bind has the following type definition: (a b -> c) a -> (b -> c) It takes a function of type a b -> c and zero or more arguments which grouped together have the type a and returns another function of type b -> c where b has the type of the rest of the arguments grouped together. Hence bind(bind) has the following type definition: (a b -> c) -> (a -> (b -> c)) Let bindable = bind(bind); then bindable(bind) also has the same type definition: (a b -> c) -> (a -> (b -> c)) i.e. bindable(bind) = bindable. AI: I doubt that there is a name for this. If there is, it would be something from lambda calculus. Unlike fixed points, where $f(x)=x$, I'm not sure what the value would be in such a concept. There is a sense in which $f(x)=f$ is also a fixed point. Namely, if $D=\lambda y.\lambda g. g(y)$, then $f$ is a fixed point of $Dx$. But I'm not sure if that really gives you anything. You might consider asking this question at the StackExchange site dedicated to computer science questions.
H: Integrals using arctangent We want to find $\displaystyle \int\dfrac{12}{16x^2 +1}\,dx$ I rewrote it in the form $ 3 \cdot \dfrac{1}{u^2 + 1} \cdot u' $ where $u=4x$. I found out that the correction sheet does the same thing, but their next step leaves me puzzled: $$ F(x) = 3 \arctan (4x) + C$$ Where did the $u' = 4$ go? AI: As you noted, if $u = 4x$, then $du = 4dx$, yielding $$\begin{split} \int \frac{12dx}{16x^2+1} &= 3 \int \frac{4dx}{(4x)^2+1} \\ &= 3 \int \frac{u'dx}{(u)^2+1} \\ &= 3 \int \frac{du}{u^2+1} \\ &= 3 \arctan(u) + C \\ &= 3 \arctan(4x)+C \end{split} $$
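If you want to double-check the correction sheet (my addition, assuming sympy):

```python
import sympy as sp

x = sp.symbols('x')
F = sp.integrate(12 / (16 * x**2 + 1), x)
print(F)              # 3*atan(4*x)
print(sp.diff(F, x))  # 12/(16*x**2 + 1), recovering the integrand
```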
H: Continuity of a function given as a maximum Let $f(x,y)$ be continuous in $[a,b]\times[c,d]$, and define the function $g(y)$ as follows: $$g(y):=\max_{x\in[a,b]}f(x,y),\quad\forall y\in[c,d].\tag{1}$$ The question is whether we can conclude that $g\in C[c,d]$, or whether there is a counterexample showing that $g$ need not be continuous. Any answer will be appreciated! AI: We first note that $f$ is uniformly continuous, because it is a continuous function defined on a compact set. This implies that for given $\alpha\in [c,d]$ and $\epsilon>0$, we can find some $\delta>0$ such that, whenever $|y-\alpha|<\delta$, $$\tag{1}f(x,\alpha)-\epsilon\leq f(x,y)\leq f(x,\alpha)+\epsilon,\ \forall\ x\in [a,b]$$ We conclude from $(1)$ that $$\tag{2}g(y)\geq\max_{x} \{f(x,\alpha)-\epsilon\}=g(\alpha)-\epsilon$$ On the other hand, $$\tag{3}f(x,y)\leq f(x,\alpha)+\epsilon\leq g(\alpha)+\epsilon $$ Hence, from $(3)$, we have that $$\tag{4}g(y)\leq g(\alpha)+\epsilon$$ By combining $(2)$ and $(4)$ we get $|g(y)-g(\alpha)|\leq\epsilon$ whenever $|y-\alpha|<\delta$, so $g$ is always continuous on $[c,d]$.
H: When I was told to factorize a polynomial in school, what was I actually being asked to do? In school we spent several hours factorizing polynomials. But now, as I've started gaining some knowledge of polynomial rings, it suddenly occurred to me that none of the books I practiced with then suggested the factorized form of the polynomial $x^2-4$ as $$(cx+2c)\left(\frac{1}{c}x-\frac{2}{c}\right)$$ for arbitrarily chosen $c\ne0$, even though both $(cx+2c),\left(\frac{1}{c}x-\frac{2}{c}\right)\in\mathbb R[x].$ They always took the form for $c=1$ as the answer. Why is it so? When I was told to factorize a polynomial in school, what was I actually being asked to do? Added: According to Wikipedia, "the aim of factoring is usually to reduce something to “basic building blocks”, such as numbers to prime numbers, or polynomials to irreducible polynomials." Well, here $(cx+2c),\left(\frac{1}{c}x-\frac{2}{c}\right)\in\mathbb R[x]$ are irreducible. However, for $6$, $-3$ is not a prime factor. AI: It's the same reason that when asked to factor the integer $6$, you never see people give $(-2) \cdot (-3)$ as the answer. In many settings where factorization makes sense, there are a variety of reasonable choices for the factors, but there is a standard way to normalize the choice. When factoring integers, we usually choose the positive representative of each prime (i.e. $2$ rather than $-2$). When factoring polynomials over a field, we usually choose the factors to be monic. When factoring polynomials over the integers, we usually choose the leading coefficient of each factor to be positive. E.g. the factorization of $2 - 8x^2$ over the rationals would usually be written as $(-8)(x-1/2)(x+1/2)$, but its factorization over the integers would usually be written as $(-1)2(2x-1)(2x+1)$. (Actually, we would probably write $-2$ rather than $(-1) 2$, but I wanted to emphasize that $-2$ here is meant to be the product of a sign and the factor $2$, rather than the number $-2$.)
H: How do I show that a set is an element of a set in a Venn diagram? More precisely, is there a difference between $\{p,s,r,q,t\}$ and $\{\{p,s\},r,q,t\}$, and if so, how would you show it using a Venn diagram? I have in my notes a Venn diagram where p, s, r, q and t are all obvious elements of set A. C is a subset of A, fully enclosed. Is C also an element of A? Can I state that {C, r, q, t} is a subset of A? The question on the next page is: {{p, s}, r, q, t} is a subset of A. The answer given is false, but I thought it was true. AI: In terms of this style of diagram, you should think of $\{p,s\}$ as being a dot completely separate from the dots for $p$ and $s$; this is a little counterintuitive, but as the set $\{p,s\}$ is different from both the element $p$ and the element $s$, it needs to be drawn as a separate element. Then you can draw circles to represent the sets $\{p,s,r,q,t\}$ and $\{\{p,s\},r,q,t\}$ so that both contain the dots for $r$, $q$ and $t$, but only one contains the dots for $p$ and $s$, and only the other contains the dot for $\{p,s\}$. In particular, $\{\{p,s\},r,q,t\}\subseteq A$ would require the set $\{p,s\}$ itself to be an element of $A$, which it is not; that is why the given answer is "false". Of course, you might also want to draw a circle around $p$ and $s$ to represent $\{p,s\}$, but this has to be somehow unrelated to the dot - thinking of $\{p,s\}$ as a set is different to thinking of it as an element of other sets for the purposes of this diagram. (For this reason, I don't think Venn diagram notation is particularly good at dealing with these kinds of questions, because you're forced to think of the same thing in two very different ways simultaneously).
H: Need help with $\int \dfrac{2x}{4x^2+1}\,dx$ We want $$\int \dfrac{2x}{4x^2+1}\,dx$$ I only know that $\ln(4x^2 + 1)$ would have to be in the mix, but what am I supposed to do with the $2x$ in the numerator? AI: Hint: make the substitution $u = 4x^{2} +1$. Then $du = 8x\,dx$, and the integral becomes: $$\frac{1}{4} \int \frac{8x\,dx}{4x^{2}+1} = \frac{1}{4} \int \frac{du}{u}$$
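Completing the hint, $\frac14\int\frac{du}{u}=\frac14\ln|u|+C=\frac14\ln(4x^2+1)+C$ (the absolute value can be dropped since $4x^2+1>0$). For a quick check with SymPy, assuming it is available:

```python
from sympy import symbols, integrate, diff, simplify

x = symbols('x')
F = integrate(2*x / (4*x**2 + 1), x)
print(F)                                        # log(4*x**2 + 1)/4
print(simplify(diff(F, x) - 2*x/(4*x**2 + 1)))  # 0, confirming the antiderivative
```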
H: Differentiation of inverse functions using graphs with conditions? I was trying to differentiate this equation, and I got an answer, but it matches none of the options. Any help on how to solve this one? I tried converting the function to $y=\tan^{-1}\tan{\frac{x}2}$ and then using the given condition to differentiate, but my answer does not come out right. Please help me solve this. The correct answer is option (b). My second question: I am puzzled about how the values of $\cos{\frac{x}2}$ and $\sin{\frac{x}2}$ can affect the differentiation. AI: As you did before, by writing the fraction in terms of $k=\tan (x/2)$, we have $$y=\frac{\sqrt{1+2k/(1+k^2)}+\sqrt{1-2k/(1+k^2)}}{\sqrt{1+2k/(1+k^2)}-\sqrt{1-2k/(1+k^2)}}=\frac{|k+1|+|k-1|}{|k+1|-|k-1|}$$ So, if $k-1>0$, which is equivalent to $\sin(x/2)>\cos(x/2)$, we get $$y=\frac{k+1+k-1}{k+1-k+1}=k=\tan(x/2)$$ Now think about this latter one.
H: Two roots of polynomial If a polynomial with rational coefficients has the root $1 + \cos(2\pi/9) + \cos^2(2\pi/9)$, then it also has the root $1+\cos(8\pi/9)+\cos^2(8\pi/9).$ How to prove it? AI: Take $\omega=e^{i2\pi/9}$. Let $\alpha=1 + \cos(2\pi/9) + \cos^2(2\pi/9)$. Then $4\alpha = 6+2\omega+ \omega^2 + \omega^7 + 2\omega^8$, by using $2\cos(2\pi/9)=\omega+\bar\omega$ and $\omega^9=1$. Let $\beta = 1+\cos(8\pi/9)+\cos^2(8\pi/9)$. Then $4\beta = 6 + \omega + 2\omega^4 + 2\omega^5 + \omega^8$, by using $2\cos(8\pi/9)=\omega^4+\bar\omega^4$. Now note that the map $\omega \mapsto \omega^4$ on $\mathbb Q(\omega)$ sends $\alpha$ to $\beta$, and the inverse map $\omega \mapsto \omega^7$ sends $\beta$ to $\alpha$; both extend to field automorphisms of $\mathbb Q(\omega)$ fixing $\mathbb Q$, since $4$ and $7$ are units modulo $9$ (indeed $4\cdot 7\equiv 1 \pmod 9$). This means that for $p$ a polynomial with rational coefficients, $p(\alpha)=0$ iff $p(\beta)=0$, as required. Alternatively, you can also conclude that $\alpha$ and $\beta$ are conjugate algebraic numbers and your result follows because their minimal polynomial is the same.
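One can spot-check the "same minimal polynomial" claim with a CAS. A sketch using SymPy, assuming it is available and can handle these trigonometric algebraic numbers (I have not pinned down the exact polynomial it returns):

```python
from sympy import cos, pi, minimal_polynomial, symbols

x = symbols('x')
alpha = 1 + cos(2*pi/9) + cos(2*pi/9)**2
beta = 1 + cos(8*pi/9) + cos(8*pi/9)**2

p = minimal_polynomial(alpha, x)
q = minimal_polynomial(beta, x)
print(p)
print(p == q)  # expected: True, since alpha and beta are conjugates
```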
H: Proving that if $N<10^{30}$ then $\sum_{n=1}^{N}\frac{1}{n}<101.$ So, I am asked to prove that if $N<10^{30}$ then $$\sum_{n=1}^{N}\frac{1}{n}<101.$$ I am given the information that $2^{10}=1024$, and in the previous part of the question I proved that $$0\leqslant \sum_{n=1}^{N}\frac{1}{n}-\ln N\leqslant 1.$$ So I reasoned as follows, $$\sum_{n=1}^N\frac{1}{n}<1+\ln N$$ and so we want $1+\ln N < 101$, which means $$\ln N < 100\Rightarrow N<e^{100}=(2+1/2 + 1/6 + \cdots )^{100}\approx 2^{100}=(2^{10})^{10}=(1024)^{10}\approx (10^3)^{10}=10^{30}.$$ Thus, $N<10^{30}$. My main concern is over the approximation sign, which should in fact be a strictly greater sign, which then would mean that even though $\ln N<100$ this does not necessarily mean that $N<10^{30}$, which is what I have to show. So, would this proof be acceptable, or is there any other way to approach the problem? I am mostly looking for hints. Thanks. AI: You are right to be worried: the greater-than sign is in the wrong direction for what you want to prove. Your implications also go the wrong way. You are correct that you want $\ln N \lt 100$, but you should be finding something that implies this, not something that it implies. Do you have an upper bound for $\ln 2$ or $\ln 10$? If you know $\ln 10 \lt 2.31$ you are home.
H: Nontrivial solution What's the trick to find the real numbers $\lambda$ for which the following system of equations has a nontrivial solution? $x_1 + x_5 = \lambda x_1 $ $x_1 + x_3 = \lambda x_2 $ $x_2 + x_4 = \lambda x_3 $ $x_3 + x_5 = \lambda x_4 $ $x_1 + x_4 = \lambda x_5 $ AI: The system can be written in matrix form as $Ax=0$, where $$ A= \begin{bmatrix} 1-\lambda & 0 & 0 & 0 & 1 \\ 1 & -\lambda & 1 & 0 & 0 \\ 0 & 1 & -\lambda & 1 & 0 \\ 0 & 0 & 1 & -\lambda & 1 \\ 1 & 0 & 0 & 1 & -\lambda \end{bmatrix} $$ A homogeneous system has a nontrivial solution if and only if the determinant of the matrix is $0$. Developing the determinant with respect to the first row we get $$ \det A= (1-\lambda)\det \begin{bmatrix} -\lambda & 1 & 0 & 0 \\ 1 & -\lambda & 1 & 0 \\ 0 & 1 & -\lambda & 1 \\ 0 & 0 & 1 & -\lambda \end{bmatrix} +\det \begin{bmatrix} 1 & -\lambda & 1 & 0 \\ 0 & 1 & -\lambda & 1 \\ 0 & 0 & 1 & -\lambda \\ 1 & 0 & 0 & 1 \end{bmatrix} $$ Continue the development; you'll find a fifth degree polynomial in $\lambda$, the roots of which answer your question.
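Expanding such determinants by hand is error-prone, so a CAS check is handy. A short SymPy sketch, assuming it is available, that factors $\det A$ as a polynomial in $\lambda$:

```python
from sympy import symbols, Matrix, factor

lam = symbols('lambda')
A = Matrix([
    [1 - lam, 0,    0,    0,    1   ],
    [1,       -lam, 1,    0,    0   ],
    [0,       1,    -lam, 1,    0   ],
    [0,       0,    1,    -lam, 1   ],
    [1,       0,    0,    1,    -lam],
])
# The real roots of this quintic are exactly the lambdas with nontrivial solutions.
print(factor(A.det()))
```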
H: How to prove (global) uniqueness of solution to linear, first order ODE? Consider the first order linear ODE $F(x) + A x F'(x) + B = 0$, with initial condition $F(1) = 1$. Moreover, let $A,B \neq 0$, real, and $\operatorname{sgn}(A) = \operatorname{sgn}(B)$.* *... if the latter matters. Mathematica gives me the following solution, which is easy to verify: $F(x) = (1+B)(x^{-1/A}) - B$. My question is: is there any elementary way or theorem to establish that the above must be the unique global solution to the given ODE? Do I have to apply the Picard–Lindelöf theorem (if that works), or is there an easier argument? I have not much knowledge about how to solve differential equations; the question arises from an applied economics paper I'm working on. AI: You can't expect "global" uniqueness (for all real $x$). The reason is that the differential equation provides no information about $F'(0)$ and thus the portions for $x > 0$ and for $x < 0$ "don't know about each other". Here's how to prove uniqueness on $(0, \infty)$. Suppose you have two solutions $F$ and $G$ of the same problem. Set $f(x) = F(x) - G(x)$ for $x > 0$, then $Axf'(x) + f(x) = 0$ and $f(1) = 0$. Now set $g(x) = x^{1/A}f(x)$. Then you can check that $g'(x) = 0$ for $x > 0$ and $g(1) = 0$. Therefore $g(x) = 0$ for all $x >0 $ and therefore $F(x) = G(x)$ for all $x > 0$. Here is an example of two solutions that agree for $x > 0$ but are different for $x < 0$. Consider the two functions $$ F(x) = \begin{cases} (1+B)x^{-1/A} - B \quad (x > 0)\\ B x^{-1/A} - B \quad (x < 0) \end{cases} $$ and $$ G(x) = (1+B)x^{-1/A} - B \quad (x \ne 0) \, . $$ Set $F(0) = G(0) = -B$. Both functions satisfy the differential equation and the condition $F(1) = 1$, but they differ for $x < 0$.
H: Nonstandard structure of Presburger arithmetic Let $\mathfrak {R}_A = (\Bbb {N}; 0, S,<,+)$. What can we say about ${}^{\ast}\Bbb N$, the universe of a non-standard model of the first order theory of $\mathfrak {R}_A$? Firstly, because every non-zero element must have a predecessor and a successor, ${}^{\ast}\Bbb N$ must be a union of $\Bbb N$ and multiple $\Bbb Z$-chains, that is, ${}^{\ast}\Bbb N = \Bbb{N} \cup (A \times \Bbb {Z})$. So the problem reduces to what the set $A$ should look like. Secondly, since for every element either itself or its successor must be some element's double, $A$ must be a densely ordered set without greatest or least element. But there are still a lot of candidates for $A$, say $\Bbb R$, $(0,1)$, $\Bbb Q$. Can we sharpen this result? AI: What is known is that if the model is countable, then $A=\mathbb Q$. Since the ultrapower construction does not produce a countable model, $^\star \mathbb N$ must be more complicated. It is probably hellishly complicated. A good reference for this is Bovykin, Andrey; Kaye, Richard, Order-types of models of Peano arithmetic. Logic and algebra, 275–285, Contemp. Math., 302, Amer. Math. Soc., Providence, RI, 2002. A review may be found at http://www.ams.org/mathscinet-getitem?mr=1928396
H: if $X$ is a vector field how can I find $Y$ such that $[X,Y]=0$? Suppose I am given a holomorphic vector field $X$ over a complex manifold $M$. To simplify this we can suppose that $X$ is a holomorphic vector field in $\mathbb{C}^n$ for $n=2$ or $n=3$. How can I determine another vector field (non-collinear with $X$) such that their Lie bracket $[X,Y]=0$? I am trying to do this without applying the vector field $[X,Y]$ to an arbitrary $f$ and working with coordinates and a lot of derivatives. Should this problem be too complicated (and I think it is), we should probably stick with polynomial vector fields (or even with the homogeneous polynomial ones, although these have already been classified here: link in dimension two). I am hopeful that something has been done in this direction in the Lie group theory literature. Any help is appreciated. AI: Gustavo, this probably doesn't make you happy, but you need a vector field that's invariant under the flow of $X$. So pick an arbitrary hyperplane $H$ transverse to $X_0=X(0)\in\mathbb C^n$, and let $Y_0\in \mathbb C^n$ be arbitrary. Set $Y(p)=Y_0$ at points $p\in H$. Let $\phi_t$ be the $X$-flow, and take $Y(\phi_t(p)) = (\phi_t)_{*p}Y_0$. Does this work?
H: Show that a group homomorphism $f$ is the identity. Suppose that $f$ is a group homomorphism from $\mathbb Z_7\times\mathbb Z_7$ to itself satisfying $f^5 = \operatorname{id}$ (where $f^5=f\circ f\circ f\circ f\circ f$). Show that $f$ is the identity. AI: Hint: In a group $G$ (what group? Its order divides $42\cdot 48$) if $a\in G$ then $a^{|G|}=\mathrm{id}$. If $a^k=\mathrm{id}$ and $a^n=\mathrm{id}$ then $a^{\gcd(k,n)}=\mathrm{id}$. Can you apply the above to your problem? What if $f^3=\mathrm{id}$ or $f^{11}=\mathrm{id}$?
H: Evaluate the integral $\int_{0}^{\infty} \frac{1}{(1+x^2)\cosh{(ax)}}dx$ The problem is: Evaluate the integral $$\int_{0}^{\infty} \frac{1}{(1+x^2)\cosh{(ax)}}dx$$ I have tried expanding $\frac{1}{\cosh{ax}}$ and giving the result in the following way: First, note that $$\frac{1}{\cosh{(ax)}}=\frac{2e^{-ax}}{e^{-2ax}+1}=\sum_{n=0}^{\infty}2(-1)^n e^{-(2n+1)ax}$$ Secondly, we consider $f(a)=\int_{0}^{\infty} \frac{e^{ax}}{1+x^2}dx$ Some calculation results in $f''(a)+f(a)=\int_{0}^{\infty}e^{ax}dx=-\frac{1}{a}$ We substitute $f(a)=u(a)e^{ia}$ into the former result and thus $ (u'(a) e^{2ia})'=-\frac{e^{ia}}{a}.$ Let $E(a)=\int_{0}^{a} \frac{e^{it}}{t}dt=\mbox{Ei}(ia)$ where $\mbox{Ei}(x)$ is the exponential integral; then $$u'(a)= -e^{-2ia} E(a)+c_1 e^{-2ia}.$$ Hence \begin{align*}u(a) &=\frac{1}{2i} e^{-2ia}E(a) - \frac{1}{2i}\int_{0}^{a} \frac{e^{-it}}{t}dt-\frac{1}{2i}c_1 e^{-2ia} +c_2 \\ &=\frac{1}{2i} e^{-2ia}E(a) -\frac{1}{2i}E(-a)-\frac{1}{2i}c_1 e^{-2ia} +c_2\end{align*} We conclude that $$ f(a)=\frac{e^{-ia} \mbox{Ei}(ia)-e^{ia}\mbox{Ei}(-ia)}{2i}+c_1 e^{-ia}+c_2 e^{ia}$$ But I got stuck here; I cannot figure out $c_1$ or $c_2$. Also, even if $c_1$ and $c_2$ were known, I cannot use the summation to get the result for the original question. Is there another way to tackle this problem? Or can I modify my method to make it feasible to get the desired result? Thanks for your attention! AI: This integral may be evaluated using residue theory. Consider the integral $$\oint_C \frac{dz}{(1+z^2) \cosh{a z}}$$ where $C$ is a semicircle of radius $R$ in the upper half plane. As $R \to \infty$, the integral about the semicircle vanishes, and we are left with the original integral equaling $i 2 \pi$ times the sum of the residues of the poles of the integrand within $C$. In this case, the poles within $C$ lie at $z_n = i (n+1/2) \pi/a$ for all $n \in \mathbb{N} \cup \{0\}$, and at $z_+ = i$. Evaluating the residues at these poles (which may be accomplished when the integrand is of the form $p(z)/q(z)$ using the formula $p(z_0)/q'(z_0)$ for a pole at $z=z_0$), we find that $$\int_{-\infty}^{\infty} \frac{dx}{(1+x^2) \cosh{a x}} = \frac{\pi}{\cos{a}} - \frac{2 \pi}{a} \sum_{n=0}^{\infty} \frac{(-1)^n}{(n+1/2)^2 \pi^2/a^2 - 1}$$ The sum unfortunately takes the form of a pair of Lerch transcendents $$\begin{align}\frac{2 \pi}{a} \sum_{n=0}^{\infty} \frac{(-1)^n}{(n+1/2)^2 \pi^2/a^2 - 1} &= \frac{\pi}{a} \sum_{n=0}^{\infty} (-1)^n \left (\frac{1}{(n+1/2)\pi/a-1}-\frac{1}{(n+1/2)\pi/a+1} \right)\\&= \sum_{n=0}^{\infty} (-1)^n \left (\frac{1}{n+\frac12-\frac{a}{\pi}}-\frac{1}{n+\frac12+\frac{a}{\pi}} \right) \\ &= \Phi\left(-1,1,\frac12-\frac{a}{\pi}\right)-\Phi\left(-1,1,\frac12+\frac{a}{\pi}\right)\end{align}$$ Therefore $$\int_0^{\infty} \frac{dx}{(1+x^2) \cosh{a x}} = \frac{\pi}{2\cos{a}} - \frac12 \left[\Phi\left(-1,1,\frac12-\frac{a}{\pi}\right)-\Phi\left(-1,1,\frac12+\frac{a}{\pi}\right) \right ]$$ It should be noted that $a \ne (k+1/2) \pi$ for any $k \in \mathbb{Z}$.
ADDENDUM I should note that, in response to @GrahamHesketh's query, the result above may be shown to be equal to a difference between two digamma functions as follows: $$\int_0^{\infty} \frac{dx}{(1+x^2) \cosh{a x}} = \frac12 \left [ \psi\left(\frac{3}{4}+\frac{a}{2 \pi} \right)-\psi\left(\frac{1}{4}+\frac{a}{2 \pi} \right) \right ]$$ This may be accomplished by noting that $$\frac{\pi}{\cos{a}} = \sum_{n=-\infty}^{\infty} (-1)^n \frac{1}{n+\frac12-\frac{a}{\pi}} = \sum_{n=0}^{\infty} (-1)^n \left (\frac{1}{n+\frac12-\frac{a}{\pi}}+\frac{1}{n+\frac12+\frac{a}{\pi}} \right) $$ $$\psi\left(\frac{1}{4}+\frac{a}{2 \pi} \right) = \sum_{n=0}^{\infty}\left (\frac{1}{n+1}- \frac{1}{n+\frac12 \left (\frac12+\frac{a}{\pi}\right)}\right )$$ $$\psi\left(\frac{3}{4}+\frac{a}{2 \pi} \right) = \sum_{n=0}^{\infty} \left ( \frac{1}{n+1}-\frac{1}{n+1-\frac12 \left (\frac12-\frac{a}{\pi}\right)}\right )$$ To establish equality, note that the result I posted above boils down to $$\frac{\pi}{\cos{a}} - \sum_{n=0}^{\infty} (-1)^n \left (\frac{1}{n+\frac12-\frac{a}{\pi}}-\frac{1}{n+\frac12+\frac{a}{\pi}} \right) = 2 \sum_{n=0}^{\infty} \frac{(-1)^n}{n+\frac12+\frac{a}{\pi}}$$ Equality between the above sum and the difference between the two $\psi$ terms is established by comparing the summands term by term.
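Closed forms like these are easy to spot-check numerically. Below is a small sketch using NumPy/SciPy, assuming they are available, comparing direct quadrature with the digamma expression at a sample value of $a$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import psi  # digamma function

a = 1.0
lhs, _ = quad(lambda x: 1.0 / ((1 + x**2) * np.cosh(a * x)), 0, np.inf)
rhs = 0.5 * (psi(0.75 + a / (2 * np.pi)) - psi(0.25 + a / (2 * np.pi)))
print(lhs, rhs)  # the two values should agree to quadrature precision
```

As an extra consistency check at $a=0$: the integral is $\pi/2$, and the reflection formula $\psi(3/4)-\psi(1/4)=\pi\cot(\pi/4)=\pi$ gives the same value on the right.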
H: $u(x, y) = 2x(1− y)$ for all real $x$ and $y$ I am stuck on the following problem: Let $u(x, y) = 2x(1− y)$ for all real $x$ and $y$. Then a function $v(x, y)$, so that $f(z) = u(x, y) + iv(x, y)$ is analytic, is (a) $x^2-(y-1)^2$ (b) $x^2+(y-1)^2$ (c) $(x-1)^2-y^2$ (d) $(x-1)^2+y^2$ My Attempt: For an analytic function $f(z)$, the Cauchy-Riemann equations must be satisfied. $u_x=v_y \implies v_y=2(1-y) \implies v=2y-y^2 +c$. Now I am stuck. Can someone help? AI: Hint: you have a slight mistake in your reasoning: integrating $v_{y} = 2(1-y)$ with respect to $y$ gives $v = 2y - y^{2} + h(x)$, where the "constant" of integration may depend on $x$... using this, which of the answers makes sense?
H: Find a point through which every surface tangent to z=xe^(y/x) passes Find a point through which every plane tangent to the surface $$ z=xe^{\frac{y}{x}} $$ passes. It's not homework. I know that I need a normal vector and the point of tangency to find a tangent plane. AI: Equivalently: if $F=z-x\exp(y/x)=0$, then by taking three points as @Christian suggested, $$A(1,0,1),~~B(-1,0,-1),~~C(1,1,e)$$ and an assumed common point, say $P(x_0,y_0,z_0)$, we should have the system $$\nabla F|_A\cdot \vec{AP}=0,~~ \nabla F|_B\cdot \vec{BP}=0,~~\nabla F|_C\cdot \vec{CP}=0$$ This system should have at least one solution if, as presumed, there is such a point. A plot of the surface helps in guessing what that point should be.
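One can also see directly which point works. The surface is the graph of $f(x,y)=xe^{y/x}$, which is homogeneous of degree $1$, i.e. $f(tx,ty)=tf(x,y)$, so Euler's identity gives $x f_x + y f_y = f$. The tangent plane at a surface point $(x_0,y_0,z_0)$ is $$z = z_0 + f_x(x_0,y_0)(x-x_0) + f_y(x_0,y_0)(y-y_0),$$ and substituting $(x,y,z)=(0,0,0)$ reduces this to $z_0 = x_0 f_x(x_0,y_0) + y_0 f_y(x_0,y_0)$, which is exactly Euler's identity. Hence every tangent plane passes through the origin, consistent with what the system above should yield.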
H: Confidence intervals, coefficient of variation & box plots Here's the background: I've stochastically modelled 3 techniques of culling a badger population over a ten year period. It quite nicely gives me the mean expected final population at the end of the 10-year period, but I'd like to meaningfully show the spread of the data (preferably in a graphical way). Being hopeless at maths, I'm in a muddle over how to show the spread, particularly as the confidence interval around the mean, the coefficient of variation and the box plot of the data don't necessarily agree on the data spread! Could someone be kind enough to explain the difference/real purpose of these 3 methods (or any others I should know about) so I may select the most appropriate one? Thanks! AI: We use a confidence interval when one is not interested in estimating the parameters or testing some hypothesis, but rather wishes to establish a lower or upper bound, or both. Whenever we want to compare the variability of two data sets which differ widely in their averages, or which are measured in different units, we do not simply calculate the absolute measures of dispersion; rather, we calculate the relative measures of dispersion, which are pure numbers independent of the units of measurement. This is what the coefficient of variation is for. Lastly, a box plot is a graphical display that simultaneously shows several important features of the data, such as location (central tendency) and spread (dispersion). So I think in your case a box plot will be a good representative of the spread of your data. Since you prefer a graphical way, a box plot is better than the others. That's all from me.
H: Space complexity of the segmented sieve of Eratosthenes It's commonplace to say that without compromising on the time complexity of $O(N\log\log N)$, the space complexity of the sieve of Eratosthenes can be reduced to $O(\sqrt{N})$ using a segmented version of the sieve. This is true, however we can do slightly better for the space complexity. The problem is that different forms for the space complexity appear all over the place. In Crandall and Pomerance we have "if the length of a segment drops below $\sqrt{N}$, the efficiency of the sieve begins to deteriorate." [Crandall and Pomerance, 2nd ed., pp. 121]. To me this implies a $O(\sqrt{N})$ memory requirement. Primesieve claims a space complexity of $O(\sqrt{N})$. On Wikipedia and also here it's claimed that the space complexity is $O(\sqrt{N}\log\log N/\log N )$. I think part of the problem is that people don't clearly distinguish between space complexity in the sense of the number of bits that need to be stored, as compared to the number of numbers, each having multiple bits, that need to be stored. I worked out the complexity myself and am posting it as an answer to my own question in case (1) someone finds a mistake or is helpful enough to comment on my answer or (2) someone else finds this useful. AI: Suppose that we want to sieve up to $N$ using blocks of length $M$ in a segmented sieve of Eratosthenes. The number of terms to sieve one block is proportional to: \begin{align} t_M &= \sum_{p\leq \sqrt{N}}\left[ 1 + \frac{M}{p} \right]\\ &= \pi(\sqrt{N}) + M\log\log N + O(M) \end{align} where the '$1+$' terms come from performing at least one operation per prime $p$ (e.g. a modulus, or checking if at least one multiple of $p$ falls in the block), and the $M/p$ terms come from sieving the block with each prime $p$. This gives a total time complexity to sieve all $N/M$ blocks as: \begin{align} t_N &= \frac{N}{M} \left[ \pi(\sqrt{N}) + M\log\log N + O(M) \right]\\ &= \frac{ 2N^{3/2}}{M \log N} + N\log\log N + O(N), \end{align} where we have used the prime number theorem on $\pi(\sqrt{N})$ and neglected the resulting error terms (one can include them; they're negligible for our purposes). If we choose $$ M = \frac{\sqrt{N}}{ \log{}N\log\log{}N }, $$ then this gives $t_N \in O(N\log\log N)$, which is the desired time complexity. Besides $M$, the only other place nontrivial amounts of memory are used is in the list of sieving primes, which is composed of $\pi( \sqrt{N} ) \in O\left( \frac{\sqrt{N}}{\log N} \right)$ numbers, each having $O(\log N)$ digits. Thus, the space complexity of the segmented sieve of Eratosthenes is: $$ S = O\left( \frac{\sqrt{N}}{ (\log N)^2\log\log N } + \frac{\sqrt{N}}{\log N} \right) = O\left( \frac{\sqrt{N}}{\log N} \right) $$ in terms of a number of numbers each of width $\log N$ that need to be stored, or $$ S_B = O\left( \frac{\sqrt{N}}{ \log N\log\log N } + \frac{\sqrt{N}}{\log N}\log N \right) = O\left( \sqrt{N} \right) $$ in terms of the number of bits of storage that are needed.
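For concreteness, here is a minimal Python sketch of a segmented sieve (my own illustration, not primesieve's code). It uses blocks of length about $\sqrt N$, which per the analysis above keeps the bit-space at $O(\sqrt N)$:

```python
import math

def segmented_sieve(n):
    """Return all primes <= n, sieving in blocks of length ~sqrt(n)."""
    if n < 2:
        return []
    root = math.isqrt(n)
    # Base primes up to sqrt(n), via a plain sieve of Eratosthenes.
    is_prime = bytearray([1]) * (root + 1)
    is_prime[0:2] = b"\x00\x00"
    for i in range(2, math.isqrt(root) + 1):
        if is_prime[i]:
            is_prime[i * i :: i] = bytearray(len(is_prime[i * i :: i]))
    primes = [i for i in range(2, root + 1) if is_prime[i]]
    base = list(primes)

    # Sieve each block [low, high) using only the base primes.
    block = max(root, 16)
    for low in range(root + 1, n + 1, block):
        high = min(low + block, n + 1)
        mark = bytearray([1]) * (high - low)
        for p in base:
            start = max(p * p, -(-low // p) * p)  # first multiple of p >= low
            mark[start - low :: p] = bytearray(len(mark[start - low :: p]))
        primes.extend(low + i for i in range(high - low) if mark[i])
    return primes

print(segmented_sieve(100))  # [2, 3, 5, 7, ..., 97]
```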
H: Isomorphism between $\mathbb{C} X \otimes \mathbb{C} X$ and $\mathbb{C} (X \times X)$ Let $G$ be a finite group and $X$ a finite set. Denote by $\mathbb{C}X$ the vector space of functions from $X$ to $\mathbb{C}$. In a book I found the following statement: $\varphi : \mathbb{C} X \otimes \mathbb{C} X \rightarrow \mathbb{C} (X \times X)$ defined by $\varphi(f_1 \otimes f_2) = h_{f_1, f_2}$ there $h : X \times X \rightarrow \mathbb{C}$ is defined by $h_{f_1, f_2}(x, y) = f_1(x) f_2(y)$ is an isomorphism. How do I show that it is linear and injective (surjectivity is trivial)? If we take $f_1 \otimes f_2, f_3 \otimes f_4 \in \mathbb{C} X \otimes \mathbb{C} X$ we need to see that $$\varphi(f_1 \otimes f_2 + f_3 \otimes f_4) = \varphi(f_1 \otimes f_2) + \varphi(f_3 \otimes f_4)$$ but how can we saw something about $f_1 \otimes f_2 + f_3 \otimes f_4$? For scalar multiplication it is simple since we know $(\lambda f_1) \otimes f_2 = \lambda (f_1 \otimes f_2)$. For injectivity, for $\varphi(f_1 \otimes f_2) = \varphi(f_3 \otimes f_4)$ we want to show that $f_1 \otimes f_2 = f_3 \otimes f_4$. That happens if and only if $f_1 = f_3$ and $f_2 = f_4$, right? My problem is that we can have $f_1(x) f_2(y) = f_3(x) f_4(y)$ for all $x, y \in X$ and not have $f_1 = f_3$ and $f_2 = f_4$. AI: As you are learning, proving anything about tensor products can be difficult. One eventually picks up on the right way of doing these things, and this is a good example to work out in full. Here are the steps: An element of $\def\C{\mathbb{C}}\C X \otimes \C X$ can be made into a function on $X \times X$ by writing $(f_1 \otimes f_2)(x,y) = f_1(x) f_2(y)$ for simple tensors and defining the complete map by imposing linearity; i.e. $$\left(\sum_{i = 1}^n f_{1,i} \otimes f_{2,i}\right)(x,y) = \sum_{i = 1}^n f_{1,i}(x) f_{2,i}(y).$$ The more abstract way of saying this is that in order to define a linear map out of a tensor product such as $\C X \otimes \C X$, it is equivalent to define a bilinear map out of the regular product $\C X \times \C X$. This bilinear map is given by the formula for simple tensors, just replacing $f_1 \otimes f_2$ by the pair $(f_1, f_2)$. It automatically extends to a linear map on $\C X \otimes \C X$. Let's call this map $\phi$, for future reference. To check that $\phi$ is an isomorphism we need to use the particulars of the problem. It's usually difficult to show that maps from a tensor product are injective, so we start with surjectivity. Suppose we have a function $g(x,y)$ on $X \times X$ (thus, $g \in \C(X \times X)$); we wish to write it as a sum of functions of the form $f_1(x) f_2(y)$. We do this in the stupidest way imaginable: for each pair $(a,b)$, let $$g_{a,b}(x,y) = \begin{cases} g(a,b) & (x,y) = (a,b) \\ 0 & \text{otherwise} \end{cases}$$ be the function whose only value is the one at $(a,b)$, so we have $$g(x,y) = \sum_{(a,b) \in X \times X} g_{a,b}(x,y),$$ which is a finite sum since $X$ is finite. Now I claim that each $g_{a,b}(x,y)$ is $\phi$ of some simple tensor; i.e. of the form $f_1(x) f_2(y)$. These functions are basically the same and defined like $g_{a,b}:$ $$f_c(x) = \begin{cases} 1 & x = c \\ 0 & \text{otherwise} \end{cases}$$ so that $g(a,b)f_a(x) f_b(y) = g_{a,b}(x,y)$, as you can check. Putting it all together, we have $$g(x,y) = \sum_{(a,b) \in X \times X} g(a,b)f_a(x) f_b(y) = \phi\biggl(\sum_{(a,b) \in X \times X} g(a,b)(f_a \otimes f_b)\biggr)(x,y).$$ This shows that $g = \phi(\sum_{a,b} g(a,b)(f_a \otimes f_b))$, so $\phi$ is surjective. 
Showing that $\phi$ is injective is, as I said, tricky. In this case, the trick is to use a dimension argument: namely, both $\C X \otimes \C X$ and $\C (X \times X)$ have the same finite dimension $\def\card#1{\lvert #1 \rvert}\card{X}^2$. The proof of this is that for any finite set $Y$, the vector space $\C Y$ has as a basis exactly the functions I denoted by $f_c$ above, taken over all $c \in Y$. Since the tensor product multiplies dimensions, we have $$\dim{\C X \otimes \C X} = \dim(\C X) \dim(\C X) = \card{X}^2 = \card{X \times X} = \dim \C(X \times X).$$ So we have a surjective linear map $\phi$ between two vector spaces of the same dimension. It follows that the kernel must have dimension zero, i.e. is the zero vector space, so $\phi$ is injective as well. This is the only proof I know of this statement; when $X$ is infinite, $\phi$ is not even surjective, because you can't take an infinite sum over the points of $X \times X$. (However, there are variants in which we impose some kind of topology on the vector spaces that would allow such sums, and this restores the isomorphism when done right.)
H: Remainder of a complex function Dividing $f(z)$ by $z-i$, the remainder is $1-i$, and dividing by $z+i$ the remainder is $1+i$. What is the remainder when $f(z)$ is divided by $z^2+1$? I started a solution using the division algorithm, but I got stuck at the beginning and am not getting any ideas. Please give me a hint and I'll try from there. AI: As $z^2+1=(z-i)(z+i)$, the relevant divisors are $z-i$ and $z+i$. If $f(z)=(z-i)(z+i)g(z)+A(z-i)+B(z+i)$ where $g(z)$ is another polynomial, then $f(i)=2iB$ and $f(-i)=-2iA$, and using the Polynomial remainder theorem, $f(i)=1-i$ and $f(-i)=1+i$. Comparing the values of $f(i)$: $2iB=1-i$, so $B=\frac{1-i}{2i}=-\frac{1+i}2$. Similarly, compare the values of $f(-i)$ to get the value of $A$. Now, $$f(z)=(z-i)(z+i)g(z)+A(z-i)+B(z+i)=(z^2+1)g(z)+z(A+B)+i(B-A)$$ Can you identify the remainder?
H: Prove unique existence of a solution for a Cauchy problem Let $f: [t_0,t_1] \times [x_0 - b, x_0+ b] \longrightarrow \mathbb{R}$ be continuous and such that $(f(t,x_2) - f(t,x_1))(x_2-x_1) \leq 0$ for all $t\in [t_0,t_1]$ and all $x_1,x_2 \in [x_0-b, x_0+b]$. Prove that the Cauchy problem $x' = f(t,x)$, $x(t_0) = x_0$ has a unique solution. My attempt: I know of a theorem that allows one to prove that the Cauchy problem has a unique solution if $f$ is Lipschitz, but I don't think I can show that with the hypotheses I have. Any hints? AI: Existence of a solution follows from the continuity of $f$. Suppose that $x_1(t)$ and $x_2(t)$ are solutions and let $h(t)=(x_2(t)-x_1(t))^2$. Then $$ h'(t)=2(x_2(t)-x_1(t))(x_2'(t)-x_1'(t))=2(x_2(t)-x_1(t))\bigl(f(t,x_2(t))-f(t,x_1(t))\bigr)\le0. $$ Then $h$ is non-increasing, non-negative and $h(t_0)=0$. The only possibility is that $h(t)=0$ for $t\in[t_0,t_1]$.
H: Covariance of sums of random variables I need some help understanding an exercise. Let $X_1, X_2, X_3 \sim N(-2,3).$ (Right here there is an ambiguity about the second parameter: is it $\sigma$ or $\sigma^2$?) First they calculate the variance $$\sigma^2\left(\sum_{i=1}^3 i X_i\right) = \sum_{i=1}^3 i^2 \sigma^2 X_i = 14 \cdot 9 = 126$$ So far I think I understand. This result implies that $\sigma = 3$. Am I correct? Then they give the following line without any explanation: $$\operatorname{cov}\left(\sum_{i=1}^3 i X_i, \sum_{i=1}^3 X_i\right) = \sum_{i=1}^3 i \cdot \sigma^2 X_i = 54$$ Can you explain what happens here? How did they derive $\sum_{i=1}^3 i \cdot \sigma^2 X_i $? AI: Imagine expanding the product $(X_1+2X_2+3X_3)(X_1+X_2+X_3)$. By independence, or uncorrelatedness (not mentioned, but necessary), we have $\operatorname{Cov}(X_i,X_j)=0$ if $i\ne j$. And of course $\operatorname{Cov}(iX_i,X_i)=i\operatorname{Var}(X_i)$. Remark: It is distressing that $N(a,b)$ has two interpretations. If the quoted calculation is correct, $b=\sigma$ is the intended interpretation. A good solution of the problem is to not use the abbreviation.
H: complex equation to be solved I need to find all solutions to the complex equation $e^{1/z} = \sqrt{e}$. Then I need to show that all these solutions are on the circle $|z-1|=1$. Using the fact that $e^{2\pi i}=1$, I solved the equation to find $z = \frac{2}{1-4ik\pi}$, but that is not what's in the back of my book. Any help from the math community is welcome. AI: Your answer and the book's answer are essentially the same since: $$\frac{2}{1-4ik\pi} \cdot \frac{1+4ik\pi}{1+4ik\pi}= \frac{2+8ik\pi}{1+16k^2\pi^2}$$ That means your answer for $k$ gives the book's answer for $-k$. Andre's answer shows why all the solutions are on the circle $|z-1|=1$.
H: Linear Transformation - Distributive laws proof How does one prove that $f\circ(g+h) = f\circ g + f\circ h \ \text{ and }\ (f+g)\circ h = f\circ h + g\circ h$? Thanks in advance. AI: First, let $U,V,W$ be vector spaces over the same field of scalars. For the first law, let $g,h : V \to W$ and $f : W \to U$ be linear transformations. To prove the first proposition let $x \in V$. Then we have by definition that: $$(f\circ(g+h))(x)=f((g+h)(x)),$$ but since addition is defined pointwise we have $(g+h)(x)=g(x)+h(x)$. Now, combining this with the fact that $f$ is linear we have: $$(f\circ(g+h))(x)=f(g(x))+f(h(x)),$$ and since again addition is defined pointwise this yields $f\circ (g+h) = f\circ g + f\circ h$. To prove the second, let $h : V \to W$ and $f,g : W \to U$, and let $x \in V$ again. By definition we know that: $$((f+g)\circ h)(x)=(f+g)(h(x)),$$ but since addition is defined pointwise we have that this is equivalent to: $$((f+g)\circ h)(x)=f(h(x))+g(h(x)),$$ and because addition is defined pointwise again this yields $(f+g)\circ h = f\circ h + g \circ h$. Now note that the first proposition depended on two things: the pointwise definition of addition of functions and linearity of $f$. The second one depended only on the pointwise definition of addition, so it works even for arbitrary functions.
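In finite dimensions, linear maps are matrices and composition is matrix multiplication, so both laws specialize to the distributivity $A(B+C)=AB+AC$ and $(A+B)C=AC+BC$. A quick numerical illustration with NumPy, assuming it is available:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# f o (g+h) = f o g + f o h corresponds to A @ (B + C) == A @ B + A @ C
print(np.allclose(A @ (B + C), A @ B + A @ C))  # True
# (f+g) o h = f o h + g o h corresponds to (A + B) @ C == A @ C + B @ C
print(np.allclose((A + B) @ C, A @ C + B @ C))  # True
```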
H: When do we need parentheses to change order of operations? A few questions about order of operations: $1$) In an expression such as $1+3+3^2$, it is legal to simplify to $4+3^2$, a violation of the grade school order of operations. In this case, are we adding an implicit set of parentheses and then simplifying? i.e. making the expression $(1+3)+3^2$? So if you simplify $1+2+(3+5)$ to $3+(3+5)$ without adding the parentheses first, are you wrong in the most pedantic sense? I guess the question could be phrased like this: does order of operations mean a strict order in which any expression should be evaluated, or does it refer to priority of operations? I.e. that it simply means that higher order operations must be completed before their operands are combined with lower order operators, or does it actually imply a strict $1\times2\times3$ for how to simplify? $2$) is implicit multiplication the same thing as explicit? I.e. is $A*(b)$ the same as $A(b)$? $3$) Is $A*(b)$ two expressions while $A(b)$ is one? AI: In the strictest sense, an expression such as $1+2+3$ isn't even defined, only $(1+2)+3$ and $1+(2+3)$ are. It is the law of associativity that allows us to interchange the latter two expressions and motivates the usual convention of dropping parentheses altogether in such sums. In view of this, the "right" to evaluate a sum in arbitrary order is already implicit in the legitimacy of leaving out parentheses. Similarly for products. The fact that we write $1+2\cdot 3$ without parentheses, although $(1+2)\cdot 3$ and $1+(2\cdot 3)$ differ, is not based on an arithmetic law, but rather on the convention that multiplication and division precede addition and subtraction. That is, $1+2\cdot 3$ is really a shorthand for $1+(2\cdot 3)$ whereas there is no shorthand for $(1+2)\cdot 3$. Finally, there is a similar convention, namely that the non-associative operations of subtraction and division are to be done left to right. That is, of the two expressions $(1-2)-3$ and $1-(2-3)$, only the first has a shorthand notation of $1-2-3$. Note however that exponentiation is not associative, e.g. $(2^3)^4\ne 2^{(3^4)}$. Since the former can be written simply as $2^{3\cdot4}$, we have the convention that the expression $2^{3^4}$ is a shorthand for $2^{(3^4)}$. I think the best way to view this is in the form of a tree where each node is either a leaf labelled with a number or (if the node has a left and a right subtree) an operator $+,-,\cdot,/$ or exponentiation. You may additionally introduce negative signs and functions as unary operators (nodes with one subtree). For associative operations such as addition and multiplication, you may loosen these rules and allow more than two subtrees. This tree determines the order of operation (note that there are no parentheses needed to build the tree): In order to perform an addition, subtraction etc. you need to first compute the two subtrees and then combine these two results accordingly. Note that the overall sequence of operation is only very loosely defined/restricted by this: You can first evaluate the left tree, then the right tree, or vice versa, or intertwined. Only the "top" operation must be last. This is all there is behind the rules of precedence and parentheses: They clarify which of several possible trees is intended.
H: Does there exist such a closed subspace of a normed linear space Let $(X,\|\cdot\|)$ be a normed linear space, and let $M$ be a closed subspace. Does there exist a closed subspace $N$ such that $X=M \oplus N$? I know such a subspace $N$ exists, but I am not sure whether such an $N$ can be chosen to be closed. AI: In short, the answer is no in general. A well-known counterexample is given by $c_0$ in $\ell^\infty$. This was first proved by Phillips in 1940. There is a long history behind this question, which is known as the complemented subspace problem for a Banach space $X$. In 1971, Lindenstrauss and Tzafriri proved that given a Banach space $X$, every closed subspace is topologically complemented in $X$ if and only if $X$ is isomorphic to a Hilbert space. Here are some details, beginning with a clarification of the notion of complement in a normed vector space, and in particular in a Banach space. 1) If two subspaces $M,N$, not necessarily closed, satisfy $X=M\oplus N$, we say that $M$ is algebraically complemented by $N$ in $X$. A stronger notion is that of topologically complemented. But these coincide in Banach spaces for pairs of closed subspaces. If $X=M\oplus N$ algebraically, with $M,N$ subspaces of $X$, we have an isomorphism $T:M\times N\longrightarrow X$ by $T(x,y)=x+y$. Putting, say, the norm $\|(x,y)\|:=\|x\|+\|y\|$ on $M\times N$, and denoting by $P:x+y\longmapsto x$ the projection onto $M$ parallel to $N$, the following are equivalent: (i) $T$ is a homeomorphism; (ii) both $M$ and $N$ are closed and $P$ is bounded. In this case, we say that $M$ is topologically complemented by $N$ in $X$. If $X$ is a Banach space, and if $M,N$ are two closed subspaces of $X$, the closed graph theorem yields: $X=M\oplus N$ algebraically if and only if topologically. 2) So when $X$ is a Banach space, you are asking whether every closed subspace is topologically complemented. This is of course true when $X$ is a Hilbert space as it suffices to take $N=M^\perp$. Therefore it is true if $X$ is isomorphic to a Hilbert space. It is also true in a normed vector space with the additional assumption that $M$ has finite dimension or finite codimension. A much more difficult result which appeared in 1971 after a long history of partial results is due to Lindenstrauss and Tzafriri: every infinite-dimensional Banach space which is not isomorphic to a Hilbert space contains a closed subspace which is not topologically complemented.
H: Prove by induction that $ 1^2+2^2+...+(n-1)^2<n^3/3<1^2+2^2+...+n^2$ In Apostol's «Calculus I» on page 33 there is the following proof by induction: To prove: $$ 1^2+2^2+...+(n-1)^2<n^3/3<1^2+2^2+...+n^2 $$ Solution: Consider the leftmost inequality first, and let us refer to this formula as $A(n)$. It is easy to verify this assertion directly for the first few values of $n$. Thus, for example, when $n$ takes the values 1, 2, 3, the assertion is true. Our object is to prove that $A(n)$ is true for every positive integer $n$. The procedure is as follows: Assume the assertion has been proved for a particular value of $n$, say for $n = k$. That is, assume we have proved $$ A(k): 1^2+2^2+…+(k-1)^2<\frac{k^3}{3} $$ for a fixed $k\geqslant 1$. Now using this, we shall deduce the corresponding result for $k+1$: $$ A(k+1): 1^2+2^2+…+k^2<\frac{(k+1)^3}{3} $$ Start with $A(k)$ and add $k^2$ to both sides. This gives the inequality $$ 1^2+2^2+…+k^2<\frac{k^3}{3}+k^2 $$ To obtain $A(k+1)$ as a consequence of this, it suffices to show that $$ \frac{k^3}{3}+k^2<\frac{(k+1)^3}{3} $$ So my question is: why does it suffice to show that the last inequality is true to finish the proof by induction? I don't see how $A(k+1)$ follows from $A(k)$. It is clear to me that $$ A(k) + k^2 < A(k+1) $$ but it is unclear why this makes sense in this case. AI: You know $A(k)$: $$1^2+2^2+\dots+(k-1)^2 < k^3/3.$$ Add $k^2$ to both sides and you know: $$1^2+2^2+\dots+k^2 < k^3/3 + k^2$$ If you also know that $k^3/3+k^2 < (k+1)^3/3$, then you know $A(k+1)$ since: $$1^2+2^2+\dots+k^2 < (k+1)^3/3$$
H: Square roots of complex numbers I know that the square root of a number $x$, expressed as $\displaystyle\sqrt{x}$, is the number $y$ such that $y^2$ equals $x$. But is there any simple way to calculate this with complex numbers? How? AI: If you represent the complex number $z$ in polar form, i.e. $$z=re^{i\theta} = r(\cos \theta+i\sin\theta)$$ where $r>0, \theta \in [0, 2\pi)$, then the square roots of $z$ are $$\sqrt z = \pm \sqrt re^{i\theta/2}$$ In general, the $k$ distinct $k$th roots of $z$ are $\sqrt[k]r\exp\left(i\left(\frac{2\pi j}{k}+\frac\theta k\right)\right)$ for $j=0,1,2,...,k-1.$
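In code, the principal square root and the polar-form recipe above agree; a short Python sketch using the standard `cmath` module:

```python
import cmath

z = 3 + 4j
r, theta = cmath.polar(z)                      # z = r * exp(i*theta)
w = cmath.sqrt(r) * cmath.exp(1j * theta / 2)  # principal square root via polar form
print(w, cmath.sqrt(z))   # both print (2+1j)
print(w * w)              # recovers (3+4j), up to rounding
```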
H: how do you show that $[\frac{a}{n} ]^2=1$, where $a \in \mathbb{Z}$ and $n$ is an odd integer? $[\frac{109}{1925} ]=[\frac{109}{5} ]^2[\frac{109}{7} ][\frac{109}{11} ] = [\frac{4}{5} ]^2[\frac{4}{7} ][\frac{-1}{11} ] = (?)^2[\frac{2^2}{7} ][\frac{-1}{11} ] = (?)^2\cdot 1 \cdot (-1)^{\frac{11-1}{2}}=1 \cdot 1 \cdot (-1) = -1 $ So how do you show that $[\frac{a}{n} ]^2=1$, where $a \in \mathbb{Z}$ and $n$ is an odd integer? (Here $[\frac{a}{n} ]$ is the Jacobi symbol, and when $p$ is prime, $[\frac{a}{p} ]$ is the Legendre symbol.) AI: The Jacobi symbol is defined as a product of Legendre symbols. Legendre symbols are $\pm 1$ when $\gcd(a,n)=1$ (which is the relevant case here; otherwise the symbol is $0$), so their square is always $1$. The same is therefore true of Jacobi symbols. In particular, in the calculation, there was no reason to transform $(109/5)^2$ to $(4/5)^2$. The result is correct, but the fact that $(109/5)^2=1$ requires no calculation.
H: "Center" of a spherical triangle I have a very deficient background in geometry, so I come across questions like these and I'm not sure how to verify my intuition. Consider three points in $\mathbb{R}^3$, given by position vectors, lying on a sphere centered at the origin. These define both a spherical triangle and a plane. Is the sum of the position vectors normal to the plane? I think this is true but I can't figure out how the details go. Also, it seems like projecting the triangle onto the plane should put the sum of the vectors in some sort of center for the triangle, would that be the circumcenter? Thank you in advance! AI: Call the three vectors $a,b,c$. In order that their sum $a+b+c$ be perpendicular to the plane, its dot product with $a-b$ would have to be $0$; likewise its product with $a-c$ and $b-c$. $$ (a+b+c)\cdot(a-b) = \underbrace{a\cdot a - a\cdot b+b\cdot a-b\cdot b} + c\cdot(a-b). $$ The sum over the $\underbrace{\text{underbrace}}$ is clearly $0$, since $a\cdot a=b\cdot b$. But if you move $c$ around without changing $a$ or $b$, the value of the last term can change, so it won't always be $0$. Where the sum would go would depend on what kind of projection is involved. If one projects each point $p$ in $\mathbb R^3$ to the intersection between the specified plane of the line through $p$ and the center of the sphere, then the sum would project to the barycenter. The reason is that the barycenter is $(a+b+c)/3$, and that's where the line through $a+b+c$ and the origin intersects the plane. If you mean orthogonal projection, I think you'd usually get a different point. Here is an amazing reference work: http://faculty.evansville.edu/ck6/encyclopedia/ETC.html Here's another: http://faculty.evansville.edu/ck6/tcenters/
H: Essential singularities of $\exp(z)$ Could you explain to me why $z = \infty$ is an essential singularity of the function $\exp(z)$? What about $z = 0$: is it an essential singularity? AI: If $z\to\infty$ along the positive real axis then $\exp(z)\to\infty$. If $z\to\infty$ along the negative real axis then $\exp(z)\to0$. If $z\to\infty$ along the imaginary axis then $\exp(z)$ oscillates between $1$ and $-1$. So $\lim\limits_{z\to\infty}\exp(z)$ does not exist within $\mathbb C\cup\{\infty\}$, so the singularity at $\infty$ is neither removable nor a pole, i.e. it is essential. As for $z=0$: $\exp$ is entire, so $z=0$ is a regular point, not a singularity at all.
H: $f_{n+1}(x):= \int_a ^x f_n(t)dt$, $\sum_{m=1} ^{\infty} f_m(x)$ is uniformly convergent Let $f_1 : [a,b] \rightarrow \mathbb{R}$ be an integrable function. Let's define a sequence $(f_n)$, $ \ \ f_n : [a,b] \rightarrow \mathbb{R}$, as $f_{n+1}(x):= \int_a ^x f_n(t)dt$. Prove that $\sum_{m=1} ^{\infty} f_m(x)$ is uniformly convergent on $[a,b]$. Could you help me with that? Thank you. AI: Denote $$M:=\int_a^b|f_1(t)|dt<\infty.\tag{1}$$ By definition, $$|f_{n+1}(x)|\le \int_a^x |f_n(t)|dt\quad \forall x\in[a,b],~\forall n\ge 1.\tag{2}$$ Using $(1)$ and $(2)$, by induction, it is easy to show that $$|f_{n+1}(x)|\le \frac{M (x-a)^{n-1}}{(n-1)!},\quad \forall x\in[a,b],~\forall n\ge 1.\tag{3}$$ The conclusion follows from $(3)$ by the Weierstrass $M$-test applied to $\sum_{m\ge 2}f_m$, since $\sum_{n\ge 1} \frac{M(b-a)^{n-1}}{(n-1)!}=Me^{b-a}$ converges.
H: Find the smallest natural number which is 4 times smaller than the number written with the same digits but in the reverse order. The question says "Find the smallest natural number which is 4 times smaller than the number written with the same digits but in the reverse order." I tried to solve it in this way: new number = 4*(original number). Thus, the new number is a multiple of 4, so its units digit must be 0, 2, 4, 6, or 8. Now, how do I proceed after this? AI: We also know that the first digit of the original number has to be $2$, which means the last digit must be $8$. So we have: $$8\,a\,b\,2=4\cdot(2\,b\,a\,8)$$ (You can check to see that $3$ digit numbers will not work). It follows that $b$ can only be $0,1,$ or $2$. If we multiply, we obtain the following equation: $$4a+3\equiv b \pmod{10}$$ From this it follows that $a$ is either $2$ or $7$ and $b=1$. This leaves only two options, and one of them works.
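Puzzles like this are also easy to confirm by brute force. A short Python verification sketch of my own:

```python
def reverse(n: int) -> int:
    return int(str(n)[::-1])

# Find numbers whose digit reversal is exactly 4 times the number itself.
hits = [n for n in range(1, 100_000) if reverse(n) == 4 * n]
print(hits)  # expected: [2178, 21978]; the smallest is 2178, with 4 * 2178 = 8712
```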
H: Max frequency of a signal? Given $$ f(x) = \cos(x) + \sin(10x)$$ how can I find the maximum frequency of this signal? I need it to set the right Nyquist rate ($2\cdot\max\text{ frequency}$). I can use Matlab if it's needed. AI: The $\sin(10x)$ term has a frequency of $\frac {10}{2\pi }$ because $x$ must increase by $\frac {2\pi }{10}$ to make a full cycle.
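Spelling this out: the maximum frequency here is $\frac{10}{2\pi}\approx 1.59$ cycles per unit of $x$ (the $\cos(x)$ term only contributes $\frac{1}{2\pi}$), so the Nyquist criterion asks for a sampling rate greater than $2\cdot\frac{10}{2\pi}=\frac{10}{\pi}\approx 3.18$ samples per unit of $x$.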
H: Need to calculate my final marks - help I don't know the formula to calculate my marks for a subject. The tests during the course account for 38% of the overall grade, whereas the final exam accounts for 62%. Let us say that on the first test I got a mark of 80% and on the second one 60%. What is my final grade and how do I calculate that? Thanks in advance! AI: If there was one test (during the course) on which you earned $80\%$, and a final exam on which you earned $60\%$, then you simply multiply the scores earned by their respective weights/proportions, $$(80\times 0.38)+(60\times 0.62) = 67.6\%\quad\text{grade for course}$$ If you earned the grades of $80$ and $60$ on two tests during the course and need to determine what grade you'll need on the final to pass the class, then let's say you need to earn a $70\%$ in the course, overall, to pass. Let $P =$ needed grade on the final. Then you need to solve for $P$ given: $$\frac{(80 + 60)}{2}\times 0.38 + 0.62 P = 70$$ As it happens, in this scenario, you will have earned an average grade of $70$ on the tests, and will therefore need to earn at least $70\%$ on the final, as well, to earn a course grade of at least $70\%$
H: When does a PDE solve a variational problem? I understand that for a functional $J[f]$ on the space of differentiable functions $f$ on some domain, setting $\delta J[f]|_{f=f_0} = 0$ yields a (possibly nonlinear) partial differential equation in $f_0$, the $f$ that minimizes $J$. Solutions are non-degenerate if extrema of $J$ are locally unique, and the uniqueness of solutions of the PDE is related to the number of extrema of $J$. My question is: given a known (nonlinear) partial differential equation with solutions $u$, is there a general method to construct a functional that the $u$ will minimize? I am not interested in functionals of the form $(f-u)^2$ and other functionals that contain the explicit form of the solutions $u$; I would like to know if there is a method to find $J$ given the terms in the PDE. I'm aware that this can be done for all linear equations such as $Lu=v$ in a straightforward way; namely, for $L\cdot u=v$, $J[u] = \langle u\cdot L\cdot u \rangle - \langle v \cdot u \rangle - \langle u \cdot v \rangle$; but I don't know of any extensions to nonlinear PDEs. AI: The problem of determining a Lagrangian (and so the action functional) such that the corresponding Euler-Lagrange equation equals a given PDE is known as the inverse problem of variational calculus. See Wikipedia (http://en.wikipedia.org/wiki/Inverse_problem_for_Lagrangian_mechanics) for an introduction and the nLab (http://ncatlab.org/nlab/show/variational+bicomplex) for a mathematical generalization of this idea (especially the references in the latter are useful: Takens/Zuckermann/...).
H: How to prove that an operator is not compact in $L_2 (\mathbb{R})$ I have the operator $(Af)(x) = \int _{\mathbb{R}} e^{{-(x-t)^2}/2} f(t) dt$. It seems to me that it isn't compact, and I'm looking for some general "if and only if" criterion for integral operators to be compact on $L_2 (\mathbb{R})$. AI: Recall that if $g$ is $L^1$, then $$ T_g:f\longmapsto f\ast g $$ is bounded from $L^p$ to $L^p$ since $\|f\ast g\|_p\leq \|f\|_p\|g\|_1$. So with the $L^1$ function $g(t)=e^{-\frac{t^2}{2}}$, in particular, $T_gf(x)=\int e^{-\frac{(x-t)^2}{2}}f(t)dt=(f\ast g)(x)$ is bounded from $L^2$ to $L^2$. Note that $T_g$ is unitarily equivalent to the multiplication operator by $\hat{g}$ via the Fourier transform. So $T_g$ is compact if and only if $$ h\longmapsto \hat{g}h $$ is compact from $L^2$ to $L^2$. As is well-known, a multiplication operator by $\theta\in L^\infty$ on $L^2(\mathbb{R})$ is compact if and only if $\theta=0$ a.e. Since $\hat{g}\neq 0$ a.e., it follows that $T_g$ is not compact.
H: Non-integer bases and irrationality I read somewhere: When it comes to properties like prime, irrational, rational, divisible by 2, etc., nothing changes when you change base. But I'm not sure about the rational/irrational one. If you use non-integer bases, even integer numbers can become ugly expressions (See 3 in base 2.5 in Wolfram). The point is: is that number irrational, or does it just have a lot of repeating digits? Either way, how could you prove it? AI: The distinction you are missing is that there are numbers, which have certain properties, and then there are numerals, which are sequences of symbols that we use to represent numbers. The base is a choice about how to represent a number as a numeral. For example, in base 10, the number 100 is represented with the symbol 100. In base 7, it is represented as 202; in base 13 it is represented as 79. In all cases, it is a perfect square; it is an even number; it is a multiple of 5, and so forth, because it is the same number. It is still equal to $36+64$, regardless of whether we write that equation as 36 + 64 = 100 or as 51 + 121 = 202 or as 2A + 4C = 79. The only questions that change with the base have to do with the representation, with the particular symbols that are used to write down the numeral. The base-10 representation of the number 100 ends with a zero, and the base-7 representation of the same number does not. A number is divisible by 10 if and only if its base-10 numeral ends with a zero, so you can conclude that 100 is divisible by 10; it is divisible by 7 if and only if its base-7 numeral ends in a 0, and so you can conclude that 100 is not divisible by 7. 3 is a rational number and an integer, regardless of how it is written or in what base. It is true that a rational number will have a repeating or terminating representation in an integer base, and an irrational number will not, but that is separate from the question of whether it is a rational number. Writing 3 in "base $2.5$" means observing: $$ 3 = \color{red}1\cdot 2.5 + \color{red}0\cdot 1 + \frac{\color{red}1}{2.5} + \frac{\color{red}0}{(2.5)^2} + \frac{\color{red}1}{(2.5)^3} + \cdots \\ = \color{red}{10.101\ldots}_{(2.5)} $$ but the fact that this representation does or doesn't repeat has nothing to do with whether the number 3 is an integer.
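Here is a small Python sketch of the greedy digit expansion in an arbitrary real base $\beta>1$ (my own illustration of how such representations are computed); for $x=3$, $\beta=2.5$ it reproduces the leading digits $10.101\ldots$ shown above:

```python
def greedy_digits(x: float, beta: float, n_frac: int = 6) -> str:
    """Greedy (beta-expansion) digits of x >= 0 in base beta > 1."""
    top = 0
    while beta ** (top + 1) <= x:   # highest power of beta needed
        top += 1
    int_part, frac_part = [], []
    for k in range(top, -n_frac - 1, -1):
        d = int(x // beta ** k)     # greedy digit for this power of beta
        x -= d * beta ** k
        (int_part if k >= 0 else frac_part).append(str(d))
    return ''.join(int_part) + '.' + ''.join(frac_part)

print(greedy_digits(3, 2.5))  # expected to start with 10.101...
```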
H: Why if $G_n$ is a subgroup of G then G is abelian? I found this exercise, that says: Let $G$ be a group and let $G_n=\{g^n : g\in G\}$. Under what hypothesis about $G$ can we show that $G_n$ is a subgroup of $G$? Apparently the answer is that G has to be abelian, but I don't see why. Reference: Fraleigh p. 59 Question 5.56 in A First Course in Abstract Algebra AI: If $G$ is abelian, then $G_n = \{ g^n : g \in G \}$ is closed under multiplication since $g^n \cdot h^n = (gh)^n$ by repeatedly commuting $g$s past $h$s. $G_n$ is always closed under inversion. If $G$ is a group with the property that $g^n h^n = (gh)^n$ for all $g,h \in G$, then $G$ is called $n$-abelian. 2-abelian groups are abelian, but 3-abelian groups need not be abelian. Furthermore, $G_n$ may happen to be a subgroup even if $g^n h^n \neq (gh)^n$ as all that is required is that $g^n h^n = k^n$ for some $k \in G$. For instance if $G$ is dihedral of order 8, then $G_3 = G$ is a subgroup, but $G$ is not 3-abelian.
H: How to describe the one point compactification of a space In my Topology course we defined the one point compactification of a Hausdorff space $\left(X,\tau\right) $ to be a compact Hausdorff space $\left(Y,\tau^{'}\right) $ such that $X\subseteq Y$, $\tau\subseteq\tau^{'}$ and $\left|Y\backslash X\right|=1 $. More specifically one such construction is given by taking $Y=X\cup\left\{ \infty\right\}$ and defining: $$\tau^{'}=\tau\cup\left\{ U\subseteq Y\;|\;\infty\in U\:\wedge\; Y\backslash U\;\mbox{is compact}\right\}$$ That is all nice and well but it gives me no clue whatsoever as to how to find a "nice representation" of the one point compactification of a given space. In particular I need to describe an embedding in $\mathbb{R}^{n}$ of the one point compactifications of each of the following: $\left[0,1\right]$ $\left(0,1\right)$ $\mathbb{N}$ Help would be greatly appreciated. EDIT: I just noticed there was another section to the question about the compactification of $X=\left(0,1\right)\times\left\{ 0,1\right\}$. Since this is all in the subspace topology of $\mathbb{R}^{2}$, which is metric, I find it more convenient to use the "heuristic" of asking which sequences in $X$ don't converge in $X$, and how to add one point to the space so that all those sequences converge to it. In the case of $\left(0,1\right)$ the answer was folding the line segment into a circle. In this case, since I essentially have two parallel "line segments" in $\mathbb{R}^{2}$, it would seem the nicest way to achieve what I want would be to fold them both into an $8$ shape, but I can't really see what sort of mapping would do that for me... EDIT 2: I tried tackling the problem in another way: instead of trying to fold $\left(0,1\right)\times\left\{ 0\right\}$ upwards and $\left(0,1\right)\times\left\{ 1\right\}$ downwards to form two ellipses with one point in common, I decided to map $\left(0,1\right)\times\left\{ 0\right\}$ and $\left(0,1\right)\times\left\{ 1\right\}$ onto two circles with the single point $\left(0,0\right)$ in common. I did this using the following mapping: $$f\left(x,y\right)=\begin{cases} \left(\sin\left(2\pi x\right),\cos\left(2\pi x\right)-1\right) & \left(x,y\right)\in\left(0,1\right)\times\left\{ 0\right\} \\ \left(\sin\left(2\pi x\right),1-\cos\left(2\pi x\right)\right) & \left(x,y\right)\in\left(0,1\right)\times\left\{ 1\right\} \end{cases}$$ If I'm not mistaken this should be a homeomorphism between $X$ and the union of two circles of radius 1, one centered at $\left(0,-1\right)$ and one at $\left(0,1\right)$, minus the point $\left(0,0\right)$. Then the one point compactification of said union would be obtained by adding the point $\left(0,0\right)$ (the union of the circles is closed as the union of two closed sets and is also bounded, and thus compact by the Heine-Borel theorem). This compactification would in turn be homeomorphic to the compactification of $X$. Could someone confirm that this train of thought indeed arrives at its intended destination? AI: The first thing you have to do to identify the one-point compactification is to identify the compact subsets of the space in question. $[ 0 , 1 ]$ is a compact (Hausdorff) space, and so we know that the compact sets are simply the closed subsets of $[0,1]$. Therefore if $[ 0 , 1 ] \cup \{ * \}$ is the one-point compactification of $[0,1]$, then the open neighbourhoods of $*$ look like $\{ * \} \cup U$ where $U$ is any open subset of $[0,1]$.
In particular, as $\varnothing$ is an open subset of $[0,1]$ it follows that $\{ * \} \cup \varnothing = \{ * \}$ is an open neighbourhood of $*$ in $[0,1] \cup \{ * \}$; i.e., $*$ is an isolated point in $[ 0,1 ] \cup \{ * \}$. $(0,1)$ is actually homeomorphic to the real line $\mathbb{R}$, and so it doesn't hurt to instead consider the one-point compactification of $\mathbb{R}$. By the Heine-Borel Theorem the compact subsets of $\mathbb{R}$ are the bounded closed sets. In particular, every compact subset of $\mathbb{R}$ is a subset of a compact set $[a,b]$ for $a < b$. It follows that the sets of the form $( b , + \infty ) \cup \{ * \} \cup ( - \infty , a )$ (where $a < b$) form a basis for the open neighbourhoods of $*$ in the one-point compactification. $\mathbb{N}$ is a discrete space, and so the compact subsets are the finite subsets. In particular, every compact set is a subset of a set of the form $[n] = \{ 0 , \ldots , n \}$ where $n \in \mathbb{N}$. It follows that a basis of the open neighbourhoods of $*$ in the one-point compactification consists of all sets of the form $\{ n+1 , n+2 , \ldots \} \cup \{ * \}$ where $n \in \mathbb{N}$.
H: Modeling the coin weighing problem Suppose we have $n$ coins with weights $0$ or $1$ and a scale for weighing them. We would like to determine the weight of each coin while minimizing the number of weighings. The book that I am reading states that the above problem can be modeled in the following way. We say that the subsets $S_1,\ldots,S_m$ of $\{1,\ldots,n\}$ are determining if any $T \subseteq \{1,\ldots,n\}$ can be uniquely determined by the cardinalities $|S_i \cap T|$ for $1 \leq i \leq m.$ The minimum number of weighings is then equivalent to the least $m$ for which a determining sequence of sets exists. My question is: how exactly does this reduce to the coin weighing problem? AI: Let the coins be $C_1,\dots,C_n$. Let $\mathscr{T}$ be the set of coins of weight $1$, and let $$T=\big\{k\in\{1,\dots,n\}:C_k\in\mathscr{T}\big\}\;,$$ the set of indices of those coins. For $k=1,\dots,m$ let $\mathscr{S}_k=\{C_j:j\in S_k\}$, and let $w_k$ be the total weight of the coins in $\mathscr{S}_k$. Since each coin has weight $1$ or $0$, $w_k$ is the number of coins in $\mathscr{S}_k$ of weight $1$, which is $|\mathscr{S}_k\cap\mathscr{T}|$. Thus, $$w_k=|\mathscr{S}_k\cap\mathscr{T}|=|S_k\cap T|\;.$$ If you weigh each of the sets $\mathscr{S}_k$, those $m$ weighings will give you the numbers $|S_k\cap T|$ for $k=1,\dots,m$. If the sets $S_k$ are determining, those $m$ numbers uniquely determine the set $T$, from which you immediately get $\mathscr{T}=\{C_k:k\in T\}$.
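The condition that $T\mapsto(|S_1\cap T|,\dots,|S_m\cap T|)$ determines $T$ is just injectivity of that map, which is easy to test by brute force for small $n$. A Python sketch of my own:

```python
from itertools import combinations

def is_determining(sets, n):
    """True iff T -> (|S_1 n T|, ..., |S_m n T|) is injective over all T in {1..n}."""
    seen = set()
    elements = range(1, n + 1)
    for r in range(n + 1):
        for T in combinations(elements, r):
            T = set(T)
            key = tuple(len(S & T) for S in sets)
            if key in seen:
                return False  # two different subsets give the same weighing results
            seen.add(key)
    return True

# Weighing each coin individually is trivially determining, with m = n:
print(is_determining([{1}, {2}, {3}], 3))  # True
# A single weighing of all coins only reveals the total, not which coins:
print(is_determining([{1, 2, 3}], 3))      # False
```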
H: can the following sum be simplified For $n \ge 3$, define $$f(n) = \sum_{k=3}^n {n \choose k}{k-1 \choose 2}.$$ Is there a closed form expression for $f$? AI: Note that $$\binom{n}k\binom{k-1}2=\frac12\binom{n}k(k-1)(k-2)\;,$$ where the $(k-1)(k-2)$ looks like the coefficient of the second derivative of $x^{k-1}$. That suggests looking at something like $$g(x)=\sum_{k=3}^n\binom{n}kx^{k-1}$$ and differentiating twice with respect to $x$ to get $$g''(x)=\sum_{k=3}^n\binom{n}k(k-1)(k-2)x^{k-2}\;:$$ clearly we then have $f(n)=\frac12g''(1)$, and all that remains is to get a closed form for $g(x)$. But $g(x)$ can be written $$g(x)=\frac1x\sum_{k=3}^n\binom{n}kx^k\;,$$ and you know a closed form for $\sum_{k=0}^n\binom{n}kx^k$, so all that’s needed now is a little algebra.
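Carrying out the remaining algebra does give a closed form; the following completion of the hint is my own, and small cases can be checked by hand. Since $(k-1)(k-2)=k(k-1)-2k+2$, summing $\binom nk(k-1)(k-2)$ over all $k\ge0$ gives $n(n-1)2^{n-2}-2n2^{n-1}+2\cdot 2^n=2^{n-2}(n^2-5n+8)$, and the terms with $k<3$ contribute only $2$ (from $k=0$). Hence $$f(n)=\tfrac12\left[2^{n-2}(n^2-5n+8)-2\right]=2^{n-3}(n^2-5n+8)-1,$$ which indeed gives $f(3)=1$ and $f(4)=7$, matching direct computation.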
H: What is $\mathrm{rank}(A)$ if it is known that $A^4 = 0$? Given $A \in M_{5 \times 5} (\mathbb F)$, what are the options for $\mathrm{rank}(A)$ if it is known: (I) $A^4 = 0$ (II) $A^3 = 0$ (III) $A^2 = 0$ Now, I am very new to Jordan forms and this is related, but I have no clue whatsoever about the relationship between the Jordan form and the rank of a matrix. All I know is that calculating how many Jordan blocks of size $k \times k$ there are involves an equation using ranks. Any help will be appreciated! AI: Let $J_A$ be a JNF for $A$. You know that for some invertible $P$ you have $A=PJ_AP^{-1}$, therefore $0=A^4=PJ_A^4P^{-1}$, therefore $J_A^4=0$. Now think about what $J_A$ looks like and consider how the powers of its nilpotent Jordan blocks behave. Also recall that $\operatorname{rank}(A)=\operatorname{rank}(J_A)$ (why?).
H: How to evaluate $\lim_{n \to \infty} \sqrt[n]{\frac{1}{n^2}} $. I was trying to find the radius of convergence of the power series $$\Sigma \frac{2^nz^n}{n^2}$$ and with the ratio test, found that the radius of convergence is $1 \over 2$. However, I am practicing finding limits and I would like to know how to proceed using the root test. So I was trying to evaluate $$\lim_{n \to \infty} \sqrt[n]{\left|\frac{2^nz^n}{n^2}\right|} = |2z| \lim_{n \to \infty} \sqrt[n]{\frac{1}{n^2}}$$ Since the ratio test already gave radius $\frac12$, the remaining limit must equal $1$, and I can see that the form looks close to $$\lim_{n \to \infty} \sqrt[n]{n},$$ which I know converges to $1$. How can I use this? AI: Just use multiplication and division of limits. Remember that if $a_n \ne 0$ and $\lim_{n \to \infty} a_n = a \ne 0$, then $$\lim_{n \to \infty} \frac{1}{a_n} = \frac{1}{a}$$ $$\lim_{n \to \infty} a_n^2 = a^2$$ Now just apply the case where $a_n = n^{\frac{1}{n}}$ and $a = 1$.
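As a quick numerical sanity check (my addition, plain Python), the quantity $(1/n^2)^{1/n}$ visibly approaches $1$:

```python
# (1/n^2)^(1/n) = 10^(-2*log10(n)/n) -> 1 as n grows.
for n in [10, 100, 1_000, 10_000]:
    print(n, (1 / n**2) ** (1 / n))
# 10 0.6309..., 100 0.9120..., 1000 0.9862..., 10000 0.9981...
```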
H: Sum of alternating-sign squares of integers: stuck with proof by induction Note that $$ A(1):1=1\\A(2):1-4=-(1+2)\\A(3):1-4+9=1+2+3\\A(4):1-4+9-16=-(1+2+3+4) $$ Let us set up $A(k)$: $$ A(k)=1-4+9-…+(-1)^{k+1}k^2=(-1)^{k+1}(1+2+…+k) $$ Setting up $A(k+1)$: $$ A(k+1)=1-4+9-…+(-1)^{k+1+1}(k+1)^2=(-1)^{k+1+1}(1+2+…+k+(k+1)) $$ Knowing that $$ 1+2+…+n=\frac{n(n+1)}{2}, $$ we simplify the right-hand sides of $A(k)$ and $A(k+1)$: $$ A(k)=(-1)^{k+1}(1+2+…+k)=(-1)^{k+1}\frac{k(k+1)}{2}\\A(k+1)=(-1)^{k+1+1}(1+2+…+k+(k+1))=(-1)^{k+1+1}\frac{(k+1)(k+2)}{2} $$ Then I am trying to show that the right-hand side of $A(k+1)$ is equal to $A(k) + (-1)^{k+1+1}(k+1)^2$, but it does not work for me. This is what I am getting: $$ A(k+1)=(-1)^{k+1+1}\frac{(k+1)(k+2)}{2}=(-1)^{k+1+1}\frac{k^2+2k+k+2}{2}=(-1)^{k+1+1}(\frac{k(k+1)}{2}+(k+1))=(-1)A(k)+(-1)^{k+1+1}(k+1)=-(A(k)+(-1)^{k+1}(k+1)) $$ What am I doing wrong? How do I prove $A(k)$ by induction? AI: The slip is one of direction: you are expanding the right-hand side of $A(k+1)$ and comparing it against the statement $A(k)$ as if $A(k)$ were a number, instead of starting from the left-hand side and using the inductive hypothesis. To prove it by induction, first show by calculation that $A(1)$ is true. Then we assume $A(n)$ is true and try to show $A(n+1)$ is true. So we assume $\displaystyle \sum_{i=1}^n (-1)^{i+1} i^2=(-1)^{n+1}\frac 12 n(n+1)$. Now we evaluate the left side of $A(n+1)$: $$\begin {align} \sum_{i=1}^{n+1} (-1)^{i+1} i^2&=(-1)^{n+1}\frac 12 n(n+1)+(-1)^{n+2}(n+1)^2 \\&=(-1)^{n+2}\left((n+1)^2-\frac 12n(n+1) \right)\\&=(-1)^{n+2}\left((n+1)(n+1-\frac 12n) \right)\\&=(-1)^{n+2}\left(\frac 12(n+1)(n+2) \right)\end {align}$$ and we have derived $A(n+1)$.
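As a numerical sanity check of the identity itself (my addition, plain Python):

```python
# Verify A(n): sum of (-1)^(i+1) * i^2 equals (-1)^(n+1) * n(n+1)/2.
for n in range(1, 21):
    lhs = sum((-1) ** (i + 1) * i * i for i in range(1, n + 1))
    rhs = (-1) ** (n + 1) * n * (n + 1) // 2
    assert lhs == rhs
print("A(n) verified for n = 1..20")
```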
H: Conditional independence regarding a fourth event Suppose two events $S_1$ and $S_2$ are conditionally independent given the event $A$, i.e., $P(S_1|S_2,A) = P(S_1|A)$ and $P(S_2|S_1,A) = P(S_2|A)$. If $B$ is an arbitrary event, does the following equality hold? $P(S_1|S_2,A,B) = P(S_1|A,B)$? AI: No. We can ignore $A$: let it be the certain event. Let $S_1$ be the event that a coin flip is heads and $S_2$ the event that a die roll is even. Now let $B$ be the event that the coin flip is heads iff the die roll is even. Then $P(S_1|S_2,B)=1$, while $P(S_1|B)=\frac 12$.
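Here is an exhaustive check of this counterexample by direct enumeration (my addition, plain Python; $A$ is taken to be the certain event, so conditioning on it is vacuous):

```python
from fractions import Fraction
from itertools import product

# Sample space: (coin, die) with coin in {H, T}, die in 1..6, all equally likely.
omega = {(c, d) for c, d in product("HT", range(1, 7))}
p = Fraction(1, len(omega))

S1 = {w for w in omega if w[0] == "H"}                       # coin is heads
S2 = {w for w in omega if w[1] % 2 == 0}                     # die is even
B  = {w for w in omega if (w[0] == "H") == (w[1] % 2 == 0)}  # heads iff even

def P(event):
    return p * len(event)

def cond(event, given):
    return P(event & given) / P(given)

print(cond(S1, S2))      # 1/2 : S1 and S2 are independent outright
print(cond(S1, S2 & B))  # 1   : but not after also conditioning on B
print(cond(S1, B))       # 1/2
```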
H: General Integral Formula I know how to find the integral below, but I would like to know if there is any clever or general formula for the integral, since my method involves simple polynomial division... $\int \frac{1}{1+\sqrt[n]x}\,dx$ Thanks. AI: For $0 \le x < 1$ (so that the geometric expansion converges) we can integrate termwise: $$\int \frac{1}{1+\sqrt[n]x}\,dx = \sum_{k=0}^{\infty}(-1)^k \int x^{\frac{k}{n}}\,dx= \sum_{k=0}^{\infty}(-1)^k \frac {n\,x^{\frac{k}{n}+1}}{k+n} = nx\,\Phi(-x^\frac{1}{n},1,n), $$ where $\Phi(z,s,\alpha)$ is the Lerch transcendent (LerchPhi) $$ \Phi(z, s, \alpha) = \sum_{m=0}^\infty \frac { z^m} {(m+\alpha)^s}. $$
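A quick numerical spot check of this antiderivative (my addition, assuming the mpmath library, whose `lerchphi` implements $\Phi$): compare $F(b)-F(a)$ with direct numerical integration over an interval inside $(0,1)$.

```python
import mpmath as mp

# F(x) = n * x * Phi(-x^(1/n), 1, n) should be an antiderivative of
# 1/(1 + x^(1/n)) on (0, 1).
n = 3
F = lambda x: n * x * mp.lerchphi(-x**(mp.mpf(1) / n), 1, n)
a, b = mp.mpf("0.1"), mp.mpf("0.9")

numeric = mp.quad(lambda x: 1 / (1 + x**(mp.mpf(1) / n)), [a, b])
print(F(b) - F(a))  # these two numbers should agree
print(numeric)
```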
H: Is this set of vectors, $\{(2, 1), (3, 2), (1, 2)\}$, linearly dependent or independent? Given a set of vectors S = $\left\{ \begin{bmatrix} 2 \\ 1 \end{bmatrix}, \begin{bmatrix} 3 \\ 2 \end{bmatrix}, \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right\} $ Find out if the vectors are linearly dependent or independent. I know that for a set of vectors to be linearly dependent, they must satisfy the below equation: $$c_1v_1 + c_2v_2 + \ldots + c_nv_n = \mathbf 0 $$ such that not all $c_i$ are zero. So, I decided to apply Gaussian elimination and I got the following equation: $ c_1\begin{bmatrix} 2 \\ 1 \end{bmatrix} + c_2 \begin{bmatrix} 3 \\ 2 \end{bmatrix} + c_3 \begin{bmatrix} 1 \\2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0\end{bmatrix} $ And, needless to say, I get the under-determined system of equations below: $$2c_1 + 3c_2 + c_3 = 0$$ $$c_1 + 2c_2 + 2c_3 = 0$$ And after solving, I get this: $$c_1 = 4c_3$$ $$c_2 = -3c_3$$ So, $c_3$ is the free variable. Assuming $c_3$ is non-zero, the vectors are linearly dependent. If it is zero, the vectors are linearly independent. How can it be that a free variable decides whether vectors are linearly dependent or not? Shouldn't it be a 100% yes or no answer that does not fluctuate depending on values of constants? AI: The answer does not fluctuate; you’ve misunderstood the definition of linear independence. A set $\{v_1,\dots,v_n\}$ of vectors is linearly dependent if and only if there is at least one set of coefficients $\{c_1,\dots,c_n\}$ such that $$c_1v_1+\ldots+c_nv_n=0$$ and at least one of the coefficients is non-zero. It is linearly independent if and only if it is not linearly dependent. Your calculations show that $4v_1-3v_2+v_3=0$, with coefficients $4,-3$, and $1$; it’s certainly true that at least one of these is non-zero(!), so your set of vectors is linearly dependent. Note that it’s always true that $0v_1+\ldots+0v_n=0$; that’s never at issue and does not make the set of vectors linearly dependent. The question is whether there is any other set of coefficients that makes the sum $0$. If there is, the set of vectors is linearly dependent; if not, it’s linearly independent.
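For a numerical confirmation (my addition, assuming numpy): three vectors in $\mathbb{R}^2$ span a space of dimension at most $2$, so they can never be independent, and the specific relation found above checks out.

```python
import numpy as np

V = np.array([[2, 1], [3, 2], [1, 2]], dtype=float)  # the vectors, as rows

# Rank of the 2x3 matrix with these vectors as columns is at most 2 < 3.
print(np.linalg.matrix_rank(V.T))  # 2, so the set is dependent

# Verify the relation 4*v1 - 3*v2 + 1*v3 = 0 found above.
c = np.array([4, -3, 1], dtype=float)
print(V.T @ c)  # [0. 0.]
```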
H: Do you need a Noetherian ring for Noether's theorem? Just wondering whether a Noetherian ring has any relation to the conservation laws of Noether's theorem. I thought I read that the universal enveloping algebra can be a Noetherian ring, and was wondering whether a Noetherian ring implies Noether's theorem. AI: They are just both eponymously named after the same person, Emmy Noether. Noether's theorem is couched mostly in terms of mathematical physics, and is a statement about symmetries on a space evolving according to some Lagrangian. While a Noetherian ring might be found while doing Lagrangian mechanics, they are not vital to the statement of Noether's theorem. There is a nifty little book by Neuenschwander called Emmy Noether's Wonderful Theorem which I found to be very accessible. By the way, when seeing other things named "Noether," you might want to be on your guard: Emmy's father, Max, was a productive mathematician as well :) For example, there is a footnote in T.Y. Lam's Lectures on Modules and Rings about the Lasker-Noether theorem on primary decomposition of modules (p. 102): The Lasker-Noether Theorem was originally named after the chess master and mathematician Emanuel Lasker and Emmy's father, Max Noether. The version of this theorem for a commutative Noetherian ring should perhaps be more appropriately called the "Lasker-Noether-Noether Theorem". Incidentally, Noether herself never knew that the rings satisfying her Endlichkeitsbedingung ["finiteness condition," referring to the ascending chain condition] were to be christened Noetherian rings. This term was coined by Claude Chevalley only in 1943; Emmy died in 1935...
H: Minimum number of k-length paths over n vertices (excuse my lack of math theory) I have $N$ vertices and can make paths of length up to $K$. How do I figure out the minimum number of paths needed to form the complete graph on $N$ vertices? What is a complete graph? A complete graph on $N$ vertices is one in which each vertex has $N-1$ connections (all vertices connect to all other vertices). If you are on any vertex of a complete graph, where (to how many vertices) can you go in one step? If it's complete, there is no need to traverse to another vertex. Since all vertices can connect to all other vertices, each connection between them has a value of one; in other words, $N$ vertices connected in a row form a path of length $N-1$. If you have a path, can you repeat vertices? Why? Maybe you meant repeat path lengths? It's not an issue in my environment, as I can traverse the same connections any number of times without any penalty, but doing so still uses up length out of $K$. In reality I'm a programmer, so I actually have to come up with a way to do this for a real-world solution. I would start with $N$ vertices, each with $0$ connections (paths to each vertex), and the value of $K$. A solution I thought of would look like this: in a repeating loop (until all $N$ vertices have $N-1$ connections) I would first choose a vertex with the least number of connections; I'll call it vertex $A$. Next I would find the next vertex with the least number of connections that has no previous connection to $A$; I'll call it vertex $B$. Create a path between $A$ and $B$. In a secondary loop (of $K-1$ times): select $A$ or $B$, whichever has the least number of connections, as $X$. Find $Y$ where $Y$ has not been connected to $X$. Create a path between $X$ and $Y$; $Y$ replaces (the selected $A$ or $B$). If $Y$ is not found, find any vertex $Y$ whose number of connections is less than $N-1$, create a duplicate path between $X$ and $Y$, and let $Y$ replace (the selected $A$ or $B$). If $Y$ is still not found, we are done. End of second loop and first loop. This will most likely work; however, I'm sure there has to be a formula to give me an actual value. AI: As there are $\frac{N(N-1)}{2}$ edges in the complete graph and each of your paths can cover at most $K$ edges, the number $m$ of paths must be $\ge \frac{N(N-1)}{2K}$. If $N$ is odd, every vertex has even degree $N-1$, so the complete graph has an Eulerian circuit, and cutting that circuit into pieces of length $K$ achieves $m=\left\lceil\frac{N(N-1)}{2K}\right\rceil$. If $N$ is even, a similar approach is possible.
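Here is a small constructive sketch of the lower bound and of the Eulerian-circuit construction for odd $N$ (my addition, assuming the networkx library; the function names are mine):

```python
import math
import networkx as nx

def lower_bound(n, k):
    """K_n has n(n-1)/2 edges and each path covers at most k of them."""
    return math.ceil(n * (n - 1) / (2 * k))

def cover_complete_graph_odd(n, k):
    """For odd n every degree n-1 is even, so K_n has an Eulerian circuit;
    chopping it into runs of at most k edges gives walks covering every edge."""
    circuit = list(nx.eulerian_circuit(nx.complete_graph(n)))
    return [circuit[i:i + k] for i in range(0, len(circuit), k)]

n, k = 7, 4
paths = cover_complete_graph_odd(n, k)
print(lower_bound(n, k), len(paths))  # 6 6: the bound is attained
```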
H: Realizing groups as symmetry groups We're supposed to think of (non-Abelian) groups as groups of symmetries of some object. Sometimes it isn't obvious what this object is. For example, the fundamental group of a topological space acts by symmetries on the universal cover. Does anyone have any examples of (non-Abelian) groups which aren't defined as the group of symmetries of something but turn out to be the group of symmetries of some non-obvious thing? Edit: In response to Qiaochu's comment I have edited the question appropriately. AI: This is an abelian example, but: the center $Z(G)$ of a group $G$ turns out to have a non-obvious interpretation as the group of symmetries of something. Namely, it's the group of natural automorphisms of the identity functor $\text{id}_G$, thinking of $G$ as a category with one object. More generally one can define the center of an object in a $2$-category and this reproduces a few interesting constructions, e.g. the second homotopy group $\pi_2$; see this blog post for details.
H: Error in understanding the theorem about the invertibility of an element (coset) of a quotient ring There's a theorem in abstract algebra which states that an element of the quotient ring $\mathbb{Z}/\langle n \rangle$, i.e. $\mathbb{Z}_n$, given by a coset $\overline{a}$ is invertible iff $a$ and $n$ are relatively prime. I'm having a problem understanding this theorem. My confusion is: can't there be situations where $a$ and/or $n$ are not primes but $\overline{a}$ is invertible? I know I'm wrong, but I would like to know where. Suppose we take the ideal $\langle 6 \rangle$ of $\mathbb{Z}$; here $n = 6$ is not prime. An element (one of the cosets) of the quotient ring $\mathbb{Z}_6$ is $$ \overline{4} = \langle 6 \rangle + 4 = \{ \cdots, -8, -2, 4, 10, 16, \cdots \} $$ Take a number from this set, say $4$: it is invertible in the sense that $4 - 4 = 0$, so its inverse is $-4$, and $4$ is not prime. Why is this invertible? Can anyone kindly tell me the error in my thought process? AI: can't there be situations where $a$ and/or $n$ are not primes but $\overline a$ is invertible? Relatively prime does not mean that the numbers themselves are prime. Integers $a$ and $n$ are said to be relatively prime if their greatest common divisor is $1$. Also, the type of invertibility that the theorem you quote is talking about is multiplicative invertibility. It's true of every number that $a - a = 0$ (that is additive invertibility), but it's not always true that there's a $b$ such that $ab \equiv 1 \pmod n$. That happens if and only if $a$ and $n$ are relatively prime.
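Here is the theorem in action for $n=6$ (my addition, plain Python; `pow(a, -1, n)` computes the multiplicative inverse mod $n$ and requires Python 3.8+):

```python
from math import gcd

n = 6
for a in range(n):
    if gcd(a, n) == 1:
        inv = pow(a, -1, n)  # modular multiplicative inverse
        print(f"{a} is invertible mod {n}: {a}*{inv} = {a * inv % n} (mod {n})")
    else:
        print(f"{a} is not invertible mod {n} (gcd = {gcd(a, n)})")
# Only 1 and 5 are invertible, exactly the residues coprime to 6.
```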
H: Given $\Sigma a_n$ diverges, show that $\Sigma \frac{a_n}{1+a_n}$ diverges. Intuitively speaking, I first thought that if the series $\Sigma a_n$ is divergent then $$\lim_{n \to \infty} a_n \ne 0,$$ and therefore it was clear that $\Sigma \frac{a_n}{1+a_n}$ would be divergent; but when I thought about it, there are cases where the terms do approach $0$ and the series still diverges, like the harmonic series. Then I tried to use the fact that since the series diverges, its sequence of partial sums is not Cauchy (I'm not even 100% sure this is the right approach, but I tried), $$\left|\sum_{i = m}^{n} a_i\right| \gt \epsilon,$$ and to derive that the other series is not Cauchy as well, but I could not finish. I appreciate all the help. AI: If $(a_n)$ is a non-negative sequence, as commented by Panda and David Mitra, the statement is true. This is because, if $\sum_n\frac{a_n}{1+a_n}$ converges, then $\lim\limits_{n\to\infty} a_n=0$, so by the comparison test, $\sum_n a_n$ converges, a contradiction. In general, the statement is false. For example, using a similar construction, let $b_n=\frac{(-1)^n}{\sqrt{n}}$ and let $a_n=\frac{b_n}{1-b_n}$. Then $b_n=\frac{a_n}{1+a_n}$ and $$a_n=b_n+b_n^2+\frac{b_n^3}{1-b_n}.\tag{1}$$ It is easy to see that $\sum_n b_n$ is convergent, $\sum_n b_n^2$ is divergent and $\sum_n \frac{b_n^3}{1-b_n}$ is convergent (because $b_n\to 0$ and $\sum_n |b_n|^3$ is convergent), so from $(1)$ we know that $\sum_n a_n$ is divergent.
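A numerical illustration of the counterexample (my addition, plain Python): the partial sums of $\sum a_n$ drift off roughly like $\log N$, while those of $\sum \frac{a_n}{1+a_n}=\sum b_n$ settle down.

```python
import math

def b(n):
    return (-1) ** n / math.sqrt(n)

def a(n):
    # a_n = b_n / (1 - b_n), so that a_n / (1 + a_n) = b_n.
    return b(n) / (1 - b(n))

for N in [10**3, 10**4, 10**5]:
    sum_a = sum(a(n) for n in range(1, N + 1))
    sum_quot = sum(a(n) / (1 + a(n)) for n in range(1, N + 1))
    print(N, round(sum_a, 3), round(sum_quot, 3))
# The first column of sums keeps growing (like sum 1/n ~ log N);
# the second converges to a finite limit.
```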
H: Find $f^{(1001)}(0)$ I am to find the value at $0$ of the 1001st derivative of the function $$f(x) = \frac{1}{2+3x^2}$$ How should I approach this kind of problem? I tried something like $$\frac{1}{2+3x^2} = \frac{1}{2}\cdot\frac{1}{1-(-\frac{3}{2}x^2)}= \frac{1}{2}\sum_{n=0}^{\infty}\left(-\frac{3x^2}{2}\right)^n$$ and comparing what's next to $x^{1001}$ in this sum and in the Maclaurin series, but damn, we've got only even powers of $x$ here. How should I do that? AI: Just note that the Taylor series of an even function has only terms of even powers of $x$ (and only odd powers if the function is odd). So, without computing the series, you should know the answer: here $f$ is even, so the coefficient of $x^{1001}$ vanishes and $f^{(1001)}(0)=0$.
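Computing the 1001st derivative symbolically is infeasible, but one can spot-check the parity argument on small odd orders (my addition, assuming sympy):

```python
import sympy as sp

x = sp.symbols('x')
f = 1 / (2 + 3 * x**2)

# f is even, so every odd-order derivative vanishes at 0.
for k in range(1, 10, 2):
    print(k, sp.diff(f, x, k).subs(x, 0))  # prints 0 for each odd k
```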
H: Dividing factorials is always an integer Is there a simple way to show that $$n!\over r!(n-r)!$$ is always an integer? AI: I'm guessing you would like a noncombinatorial proof, since ${n \choose k}$ counts something and therefore must be an integer. If that's the case, here's an idea. Hint: $$\frac{n!}{r!(n-r)!} = \frac{n\cdot(n-1)\cdot(n-2)\cdots(n-r+1)}{r!}$$ The numerator is a product of $r$ consecutive integers. Can you prove now, using induction, that the product of $r$ consecutive integers is always divisible by $r!$?
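An empirical check of the key divisibility fact in the hint (my addition, plain Python; `math.prod` requires Python 3.8+):

```python
from math import factorial, prod

def consecutive_product(start, r):
    """Product of the r consecutive integers start, start+1, ..., start+r-1."""
    return prod(range(start, start + r))

# Any product of r consecutive positive integers is divisible by r!.
for r in range(1, 7):
    assert all(consecutive_product(s, r) % factorial(r) == 0 for s in range(1, 50))
print("checked: products of r consecutive integers are divisible by r! (r = 1..6)")
```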
H: Find the dimension of the vector subspace $S_P$ of $M_3(\mathbb{R})$ I need to find the dimension of the vector subspace $S_P$ of $M_3(\mathbb{R})$. We have fixed a matrix $P$ which is singular, and $S_P=\{X: PX=0\}$. I defined a linear map $M_3(\mathbb{R})\to M_3(\mathbb{R})$, $X\mapsto PX$, but I am not able to get $\dim(\text{Image})$; I can see the map is not onto, as $P$ is singular. Could anyone tell me another way to find the dimension? Actually $P$ was given as a specific matrix, which I write below; I wanted to apply the rank-nullity theorem. $P=\begin{pmatrix}1&0&-1\\0&1&0\\1&1&-1\end{pmatrix}$ $PX=\begin{pmatrix}1&0&-1\\0&1&0\\1&1&-1\end{pmatrix}\times \begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}=\begin{pmatrix}a_{11}-a_{31}&a_{12}-a_{32}&a_{13}-a_{33}\\a_{21}&a_{22}&a_{23}\\a_{11}+a_{21}-a_{31}&a_{12}+a_{22}-a_{32}&a_{13}+a_{23}-a_{33}\end{pmatrix}$ AI: Writing the comments into an answer: We have $S_P=\{X\mid PX=0\}$. Write $X$ with entries $x_{ij}$. Then $PX=0$ leads to $x_{2j}=0$ and $x_{1j}=x_{3j}$ for all $j$ (the third row of $PX$ then vanishes automatically). So the space of solutions has dimension $3$, with the three entries of the third row of $X$ as free parameters; this is the dimension of $S_P$.
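One can confirm the dimension numerically (my addition, assuming numpy): under the column-stacking identification $X\mapsto \operatorname{vec}(X)$, the map $X\mapsto PX$ has matrix $I_3\otimes P$, so $\dim S_P = 9 - \operatorname{rank}(I_3\otimes P)$.

```python
import numpy as np

P = np.array([[1, 0, -1],
              [0, 1, 0],
              [1, 1, -1]], dtype=float)

# vec(PX) = (I_3 ⊗ P) vec(X) under column stacking, so the kernel of
# X -> PX has dimension 9 - rank(I_3 ⊗ P) = 3 * nullity(P).
M = np.kron(np.eye(3), P)
print(9 - np.linalg.matrix_rank(M))  # 3, since rank(P) = 2
```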
H: any subgroup of $(\mathbb{Q},+)$ Any subgroup of $(\mathbb{Q},+)$ is _______ cyclic and finitely generated but not abelian and normal, cyclic and abelian but not finitely generated and normal, abelian and normal but not cyclic and finitely generated, or finitely generated and normal but not cyclic and abelian. I know that any finitely generated subgroup of ($\mathbb{Q},+)$ is cyclic and hence abelian and hence normal, $(\mathbb{Z},+)$ is cyclic, abelian, generated by $\{1,-1\}$ and of course normal. But I am not able to figure out or tackle the above statements separately. AI: That's a very strangely worded question, but I think it's probably meant to be a fill-in the blank multiple choice question which would more intelligibly be stated as: Every subgroup of $(\mathbb{Q}, +)$ is ____________________. A) cyclic and finitely generated, but not necessarily abelian or normal. B) cyclic and abelian, but not necessarily finitely-generated or normal. C) abelian and normal, but not necessarily cyclic or finitely-generated. D) finitely generated and normal, but not necessarily cyclic or abelian. Since $(\mathbb{Q}, +)$ is an abelian group, every subgroup is obviously abelian and normal, which rules out A, B, and D, so the answer is C. Of course you want to actually be able to put your hands on a non-finitely generated subgroup of $(\mathbb{Q}, +)$, so as anon says in the comments, consider the additive group of $\mathbb{Z}[\frac{1}{2}]$, i.e. the subgroup of $(\mathbb{Q}, +)$ consisting of elements where the denominator is a power of 2. (Note that a finitely-generated subgroup of $(\mathbb{Q}, +)$ has an upper bound on its denominators, and is in fact cyclic.)
H: How to calculate the surface area of a curved surface? Could anyone explain how to calculate the surface area of a curved surface? I am trying to calculate the surface area of a "vaulted" ceiling that is 24' long and 7' wide, where the height of the curve is 4' at the mid-point, so that I can figure out how much paint I need to buy. [Illustration of the vaulted ceiling omitted.] If it were a simple flat ceiling, I would just multiply length times width, but that's not what I have here, and a Google search is leaving me empty-handed. Any ideas? Thank you very much! -Neal AI: If I understand your description correctly, we can figure out the area as follows, modelling the cross-section as a circular arc. A rough estimate (taking the width as $8$ instead of $7$, which makes the cross-section a half-circle of radius $4$) gives $\pi r l = 96 \pi \approx 300$. For the exact shape, let $\theta = \arctan\frac{4}{3.5}$ be the inscribed angle at one edge of the ceiling subtending the chord from the apex to the other edge; by the inscribed angle theorem the corresponding central angle is $2\theta$, so $r \sin \theta$ equals half the chord from an edge to the apex: $r \sin \theta = \frac{\sqrt{4^2+(3.5)^2}}{2}$. Since $\sin \theta = \frac{4}{\sqrt{4^2+(3.5)^2}}$, we have $r = \frac{4^2+(3.5)^2}{8} = \frac{113}{32}$, and numerically $\theta \approx 0.85197$. The full arc from edge to edge subtends the central angle $4\theta$, hence its length is $r (4 \theta) = (\frac{113}{32})\, 4 \arctan \frac{4}{3.5} \approx 12.034$. Hence the required area is $\approx 24 \cdot 12.034 = 288.8$ square feet. This is consistent with the rough estimate.
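Here is the same computation in a few lines (my addition, plain Python), using the standard chord/sagitta relation for the radius of a circular arc:

```python
import math

width, height, length = 7.0, 4.0, 24.0
half = width / 2

# Radius of the circular arc through the two edges and the apex:
# r = ((w/2)^2 + h^2) / (2h), the chord/sagitta relation.
r = (half**2 + height**2) / (2 * height)   # 113/32 = 3.53125
theta = math.atan2(height, half)           # ~0.85197; full arc subtends 4*theta
arc = r * 4 * theta                        # arc length ~ 12.034
print(round(arc, 3), round(arc * length, 1))  # 12.034  288.8
```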
H: An operator on $H\times H$, with $H$ Hilbert Let $(H, \langle \cdot,\cdot\rangle_H)$ be a complex Hilbert space and consider $H\times H$ with the inner product $$\langle (u,v),(z,w)\rangle_{H\times H}\ =\ \langle u,z\rangle_H + \langle v,w\rangle_H.$$ Let $A\in\mathcal{L}(H,H)$ and define the operator $B:H\times H\rightarrow H\times H$ by $$B((u,v))\ :=\ (iA(v),\ -iA^*(u)),\quad \forall\ (u,v)\in H\times H.$$ Show that $\|B\| = \|A\|$ and that $B$ is self-adjoint. I have been working on this problem and have already proved that $B\in\mathcal{L}(H\times H, H\times H)$ and $\|B\| \leq \|A\|$, but I am having trouble proving that $\|B\| \geq \|A\|$; I would appreciate some hints. Thanks in advance. AI: So you are almost there. Just note that for every $\|v \|=1$, we have $\|(0,v)\|=1$, whence $$ \|B\|\geq \|B(0,v)\|=\|iAv\|=\|Av\|. $$ Taking the sup over $\|v\|=1$ yields $\|B\|\geq \|A\|$ as desired. To prove that $B$ is self-adjoint, compute $$ ((u,v),B(u',v'))=(u,iAv')+(v,-iA^*u')=((iA)^*u,v')+((-iA^*)^*v,u') $$ $$ =(-iA^*u,v')+(iAv,u')=(B(u,v),(u',v')). $$ Here is an outline of how this would be done in Operator Algebras, using $2\times 2$ matrices over $B(H)$. It is indeed a fundamental fact that $B(H\times H)$ can be identified with $M_2(B(H))$. Note that $H\times H$ is most often denoted by $H\oplus H$. And note that the vectors of $H$ disappear from the argument. We can identify $B$ with the following $2\times 2$ matrix $$ B= \pmatrix{0&iA\\-iA^*&0} $$ with respect to the orthogonal decomposition $H\times H=H\times \{0\}\oplus \{0\}\times H$. Therefore $$ B^*=\pmatrix{0&(-iA^*)^*\\(iA)^*&0}=B\quad\mbox{and}\quad B^*B=B^2=\pmatrix{AA^*&0\\0&A^*A}. $$ So $B$ is self-adjoint. Now in general, for a block diagonal operator over an orthogonal decomposition, we have $$ \Big\|\pmatrix{S&0\\0&T}\Big\|=\max \{\|S\|,\|T\|\}. $$ Now recall the $C^*$-norm property of the operator norm of $B(H)$: $\|A^*A\|=\|AA^*\|=\|A\|^2$. Hence $$ \|B\|^2=\|B^*B\|=\max\{\|AA^*\|,\|A^*A\|\}=\|A\|^2\quad\Rightarrow\quad \|B\|=\|A\|. $$
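A finite-dimensional numerical check (my addition, assuming numpy; in the $M_2(B(H))$ language above, $B$ is the block matrix built below):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# B = [[0, iA], [-iA*, 0]] acting on C^n x C^n.
Z = np.zeros((n, n))
B = np.block([[Z, 1j * A], [-1j * A.conj().T, Z]])

print(np.allclose(B, B.conj().T))                              # True: self-adjoint
print(np.isclose(np.linalg.norm(B, 2), np.linalg.norm(A, 2)))  # True: norms agree
```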
H: How to understand ideals in $F$, which is a finite commutative ring with $1$ I do not fully understand ideals in finite rings, and I have to choose the correct answer to the following: If $F$ is a finite commutative ring with $1,$ then (i) each prime ideal is a maximal ideal. (ii) $F$ has no nontrivial maximal ideal. (iii) $F$ may have a prime ideal which is not maximal. (iv) $F$ is a field. I know that $(\mathbb{Z}_6,+,\cdot)$ is a finite commutative ring with $1$ which is not a field, so (iv) is out. I have no idea about the other options, as I have never seen an ideal of a finite ring. Thank you for your help. AI: Your example has nontrivial maximal ideals, so (ii) is also out. Now recall that an ideal $I$ of the commutative unital ring $F$ is prime if and only if $F/I$ is a domain, and maximal if and only if $F/I$ is a field. Now apply something you should know about finite domains, namely that every finite integral domain is a field; this settles (i) and (iii).
H: Isomorphism involving a tensor product I would like to know why the following map is an isomorphism: $ K[X_1,...,X_n]/(f_1,...,f_r) \otimes K [Y_1,...,Y_m]/(g_1,...,g_k) \simeq K[X_1,...,X_n, Y_1,...,Y_m] / (f_1,...,f_r,g_1,...,g_k) $ where $ K $ is an algebraically closed field. Here is my suggestion: we put $ \varphi \ : \ K[X_1,...,X_n]/(f_1,...,f_r) \times K [Y_1,...,Y_m]/(g_1,...,g_k) \to K[X_1 , ... , X_n, Y_1 , ... , Y_m ] / (f_1 , ..., f_r , g_1 , ... , g_k) $ defined by $ \varphi (P,Q) = PQ $, which is bilinear. So, according to the universal property of the tensor product, we obtain a $ K $-linear map $ f : K[X_1,...,X_n]/(f_1,...,f_r) \otimes K [Y_1,...,Y_m]/(g_1,...,g_k) \to K[X_1,...,X_n, Y_1,...,Y_m] / (f_1,...,f_r,g_1,...,g_k) $. So the question: why is $ f $ an isomorphism? How can we show it? (And sorry about my English; I am not a native speaker.) Thanks a lot. AI: Hint: One way to show that $f$ is an isomorphism is to define its inverse. Send $x_i$ to $x_i \otimes 1$ and send $y_i$ to $1 \otimes y_i$. You'll have to check that this gives a well-defined map and that it's the inverse to $f$.
H: Solving an equation with absolute value signs Can someone see why I get only one solution when solving the following equation in this way? The equation: $|x+1|+|2x-3|=|x-5| $ $$|x+1|+|2x-3|=|x-5| $$ $$\pm (x+1) \pm(2x-3)=\pm(x-5)$$ $$\pm x \pm 1 \pm 2x \mp 3 = \pm x \mp 5$$ $$\pm x \pm 2x \mp x \pm 1 \pm 5 \mp 3=0 $$ $$\pm 2x \pm 6 \mp 3 = 0$$ $$\pm 2x \pm 3=0$$ $$\pm 2x=\mp 3$$ $$x=\frac{\mp 3}{\pm 2} = -\frac{3}{2}$$ There should be another solution as well, $x=\frac{7}{4}$, found by graphing both sides and locating the intersections. AI: The problem with juggling all the $\pm$ signs at once is that the sign choices are not independent of one another or of $x$, so this manipulation loses solutions. You should instead find solutions over the intervals $]-\infty,-1]$, $[-1,3/2]$, $[3/2,5]$ and $[5,\infty[$, because over each of those intervals you know the sign of each absolute value and you will have a linear equation to solve. For instance, over the interval $[-1,3/2]$, $|x+1| = x+1$, $|2x-3| = -(2x-3)$ and $|x-5| = -(x-5)$. This gives you the equation $$ x+1 -(2x-3) = -(x-5) \quad \Longrightarrow \quad x+1 = (2x-3) - (x-5) = x+2, $$ which has no solution, hence there is no solution over the interval $[-1,3/2]$. Work out all four cases similarly. You do need to check that each solution you find actually lies in the interval you are working on, though. Hope that helps,
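For what it's worth, a CAS confirms both solutions (my addition, a sketch assuming sympy can handle piecewise-linear absolute-value equations over a real symbol, which it does in my experience):

```python
import sympy as sp

x = sp.symbols('x', real=True)  # real=True lets sympy split Abs into cases
eq = sp.Eq(sp.Abs(x + 1) + sp.Abs(2 * x - 3), sp.Abs(x - 5))
print(sp.solve(eq, x))  # [-3/2, 7/4]
```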
H: Area moment of inertia and center of gravity Can someone please explain to me once and for all why the moment of inertia of a body $A$ is calculated as $$I_x = \int_A y^2\, dA ,\quad I_y= \int_A x^2\, dA .$$ I searched Google a lot for a summary and derivation, but couldn't find a good one that explains the derivation of this formula in detail. Do you have a reference for this fact? As for the center of gravity (using double integrals), do you have a good reference for it? Thanks!! AI: See http://en.m.wikipedia.org/wiki/Moment_of_inertia: the section on calculating the moment of inertia gives a short derivation, based on energy considerations, of why the moment of inertia is given by $$\sum mr^2.$$ This is a discrete sum which is then given a continuous definition in terms of integration. For example, an object rotating about the $y$-axis (if you remember your cylindrical-shells method) has radial distance $x$, so to sum up terms of the form $mx^2$ we treat each tiny area element as contributing $$x^2\, dm = x^2 \rho\, dA.$$ Summing up yields $$\int\!\!\int x^2\, dA$$ for a lamina of constant unit density.
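As a concrete instance of such a double integral (my addition, assuming scipy): the second moment of area $I_x=\iint y^2\,dA$ of a $b\times h$ rectangle with its base on the $x$-axis has the closed form $bh^3/3$, which numerical integration reproduces.

```python
from scipy.integrate import dblquad

# I_x = integral over x in [0, b], y in [0, h] of y^2 dy dx = b*h^3/3.
# Note dblquad integrates func(y, x) with y as the inner variable.
b, h = 2.0, 3.0
Ix, _ = dblquad(lambda y, x: y**2, 0, b, lambda x: 0, lambda x: h)
print(Ix, b * h**3 / 3)  # 18.0 18.0
```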