H: Solving systems of equations using matrices I'm teaching myself how to solve systems of equations using matrices and row operations. I get the basic mechanics of it (legal operations, etc.), but it seems like it's kind-of a crapshoot deciding where to begin, and choosing the "wrong" operation to start with can lead to a really difficult problem. My question is: Are there any rules of thumb for deciding which operations to begin with and how to proceed? For instance, given the system:$$\begin{cases}2x - 9y - z = -23\\ x + 9y - 5z= 2\\ 3x + y + z = 24\end{cases}$$ What row operation would you begin with, and where would you go from there? I'm not asking you to do it for me, but for insight into your thought process regarding how to proceed. AI: Try following these steps (practice a bunch of them and you start seeing approaches, but there are no hard and fast rules on what is optimal).

1. Swap R1 and R3
2. Subtract (1/3) R1 from R2
3. Multiply R2 by (3/2)
4. Multiply R3 by -1
5. Add (2/3) R1 to R3
6. Multiply R3 by 3
7. Swap R2 with R3
8. Subtract (13/29) R2 from R3
9. Multiply R3 by (-29/297)
10. Subtract 5*R3 from R2
11. Subtract R3 from R1
12. Subtract (1/29) R2 from R1
13. Divide R1 by 3
14. Divide R2 by 29

You will end up with: $$ \left[\begin{array}{ccc|c} 1 & 0 & 0 & 5\\0 & 1 & 0 & 3\\0 & 0 & 1 & 6 \end{array}\right] $$
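A quick way to sanity-check the end result of a row reduction is to solve the same system numerically. Here is a minimal sketch (assuming numpy is available) confirming the solution $(x,y,z)=(5,3,6)$ above:

```python
import numpy as np

# Coefficient matrix and right-hand side of the system above.
A = np.array([[2.0, -9.0, -1.0],
              [1.0,  9.0, -5.0],
              [3.0,  1.0,  1.0]])
b = np.array([-23.0, 2.0, 24.0])

# Solve directly and compare with the row-reduced result.
print(np.linalg.solve(A, b))  # -> [5. 3. 6.]
```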
H: Continuity of a mapping from a set back into itself. Let $f: [a,b] \to [a,b]$ be continuous. Show that the equation $f(x) = x$ has at least one solution in $[a,b]$. Firstly, I'm going to assume $x \in [a,b]$, so $a$ is the min and $b$ is the max, or vice versa; assume the first. Thus $x > a$ and $x < b$, so we can divide the interval up into $(A=[a,x) ) \cup (B=[x,b])$. Then I am going to assume that $f$ maps the interval back onto itself but does not include $f(x)=x$; thus $f(A) \cap \overline{f(B)} = \emptyset$ and $f(B) \cap \overline{f(A)} = \emptyset$, so the image of $f$ is not connected, a contradiction, so $f(x)=x$ must hold somewhere. I'm fairly certain I'm supposed to use the intermediate value theorem, but I don't remember it. Does this work? AI: If there are two points $x,y$ s.t. $f(x)\geq x$ and $f(y)\leq y$, then the intermediate value theorem will give a solution. Therefore, assume that $f(x)>x$ for all $x\in (a,b)$. Now think about what happens as $x\to b$.
H: Partial fraction decomposition of $\frac{x-4}{(x-2)(x-3)}$ I'm trying to do the partial fraction decomposition of the following rational expression:$$\frac{x-4}{(x-2)(x-3)}$$ Here are the steps I performed: \begin{align*} x-4 & = \frac{A}{x-3} + \frac{B}{x-2}\\ x-4 & = Ax - 3A + Bx - 2B\\ x-4 & = x(A+B) - (3A + 2B) \end{align*} Form a system of equations by equating the coefficients of like powers of $x$:$$\begin{cases}A+B = 1\\ -3A - 2B = 0\end{cases}$$ Solving the system by substitution: \begin{align*} A & = -B+ 1\\ -3(-B+1) -2B & = 0\\ 3B -3 -2B & = 0\\ 3B - 2B & = 3\\ B & = 3\\ A + 3 & = 1\\ A & = - 2 \end{align*} So, my final decomposition is$$\frac{-2}{x-2} + \frac{3}{x-3} .$$ However, the answer in the back of my packet is$$\frac{2}{x-2} + \frac{-1}{x-3} .$$ What have I done wrong? AI: The answer in your book is correct and your calculation is wrong. I am writing out the correct one for you now. $$\dfrac{x-4}{(x-2)(x-3)}=\dfrac{A}{x-2}+\dfrac{B}{x-3}=\dfrac{Ax-3A+Bx-2B}{(x-2)(x-3)}=\dfrac{(A+B)x+(-3A-2B)}{(x-2)(x-3)}$$ so we should have $$A+B=1,\qquad -3A-2B=-4$$ By multiplying the first equation by 2 and adding the result to the second one, you will have $-A=-2$, so $A=2$, and by $A+B=1$ we have $B=-1$
H: Finding the limit for functions with two variables I know that when we have a limit of a function with $2$ variables, the limit must be the same, regardless of the path we take. So this is useful for proving that a limit does not exist. But when you've tried this method for different "paths" (e.g., $(x,0),(0,y),(x,x),(y,y)$, etc...) and you think that the limit does exist, how do you show it? For example here is a question from my textbook: Evaluate the limit: $$\lim_{\large{(x,y) \to (0,0)}} \dfrac{xy \sin(xy)}{x^2+y^2}$$ The answer is supposed to be $0$ but I don't see how you can prove that it is $0$ for any direction you approach $(0,0)$ from. Can someone please help me? Thanks. AI: If you change to polar coordinates, you get $$ \lim_{r \rightarrow 0} \sin(\theta) \cos(\theta) \sin(r^2 \sin(\theta) \cos(\theta)). $$ It might look a bit confusing, but the essential thing is that $\sin(x)$ goes to zero as $x$ goes to zero, and that $\sin(\theta)$ and $\cos(\theta)$ are at most $1$ in absolute value.
H: Is there a formula to calculate the minimum height of an n-nary tree with L leaves? I'm trying to figure out if there is a way to calculate the minimum height of an n-nary tree with L leaves. Is there such a formula? AI: I found it: $$ \text{minimum height}= \lceil \log_n L \rceil$$
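A small sketch of that formula in code (the function name and the integer-arithmetic loop are my own choices; the loop avoids floating-point issues that `math.log` can introduce for large inputs):

```python
def min_height(n: int, L: int) -> int:
    """Minimum height of an n-ary tree with L leaves: ceil(log_n(L))."""
    height, capacity = 0, 1
    while capacity < L:      # a tree of height h has at most n**h leaves
        capacity *= n
        height += 1
    return height

print(min_height(2, 8))   # 3: a binary tree of height 3 has up to 8 leaves
print(min_height(3, 10))  # 3: since 3**2 = 9 < 10 <= 27 = 3**3
```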
H: Solving ODE using Frobenius method. 3 coefficients I'm trying to learn the Frobenius method by solving some problems (ODEs). For example: $$xy''+(2x+1)y'+(x+1)y=0$$ Let $y=\sum\limits_{n=0}^\infty a_nx^{n+r}$. Then, I took derivatives and put into the equation: $$\sum\limits_{n=0}^\infty a_n(n+r)^2x^{n+r-1}+2\sum\limits_{n=0}^\infty a_n(n+r+1)x^{n+r}+\sum\limits_{n=0}^\infty a_nx^{n+r+1}=0$$ After I shifted to make their orders same: $$\sum\limits_{k=-2}^\infty a_{k+2}(k+r+2)^2x^{k+r+1}+2\sum\limits_{k=-1}^\infty a_{k+1}(k+r+2)x^{k+r+1}+\sum\limits_{k=0}^\infty a_kx^{k+r+1}$$ And if I leave first $k=-2, k=-1$ parts, I can find a relationship among 3 coefficients: $$\sum\limits_{k=0}^\infty [x^{k+r+1}(a_{k+2}(k+r+2)^2+a_{k+1}(k+r+2)+a_k)]=0$$ Now, here I could find a relationship with $a_{k+2},a_{k+1},a_k$. But, in this method we should find proportionality between 2 coefficients, not 3. For example, this: Frobenius Method to solve $x(1 - x)y'' - 3xy' - y = 0$ Can you, please, suggest a solution? AI: You state: After I shifted to make their orders same: $$ \sum_{k = -2}^\infty a_{k + 2}(k + r + 2)^2 x^{k + r + 1} + 2 \sum_{k = -1}^\infty a_{k + 1}(k + r + 2)x^{k + r + 1}+\sum_{k = 0}^\infty a_k x^{k + r + 1} $$ What you do is the following. Given that $$ \sum_{k = 0}^\infty a_k (k + r)^2 x^{k + r - 1} + \sum_{k = 0}^\infty a_k(2 k + 2 r + 1)x^{k + r} + \sum_{k = 0}^\infty a_k x^{k + r + 1} $$ (note that your second sum was incorrectly calculated: $2xy'+y$ contributes $2(k+r)+1$, not $2(k+r+1)$), you need to separate the necessary terms of the sums in order to group the powers of $x$ correctly, i.e.: \begin{multline} \sum_{n = 0}^\infty a_n (n + r)^2 x^{n + r - 1} + \sum_{n = 0}^\infty a_n (2n + 2r + 1) x^{n + r} + \sum_{n = 0}^\infty a_n x^{n + r + 1} = \\ a_0 r^2 x^{r-1} + a_1 (r + 1)^2 x^r + \sum_{n=2}^\infty a_n (n + r)^2 x^{n + r - 1} + (2 r + 1) a_0 x^r + \\ \sum\limits_{n = 1}^\infty a_n (2 n + 2 r+ 1)x^{n + r} +\sum_{n = 0}^\infty a_n x^{n + r + 1} = 0 \end{multline} Regrouping orders, you have \begin{multline} a_0 r^2 x^{r-1} + [a_1 (r + 1)^2 + a_0 (2 r + 1)] x^r + \\ \sum_{k = 0}^\infty \left\{ a_{k + 2}(k + r + 2)^2 + a_{k + 1} (2k + 2r + 3) + a_k \right\} x^{k + r + 1} = 0 \end{multline} Each power of $x$ needs to vanish, hence $r^2 = 0$. This is the indicial polynomial (details here). This means that $r = 0$ and \begin{align} a_1 + a_0 &= 0 \\ a_{k + 2}(k + 2)^2 + (2 k + 3) a_{k + 1} + a_k &= 0 \end{align} which closes the recurrence relation. The first three terms are \begin{align} a_1 &= -a_0\\ a_2 &= \frac{1}{2!}a_0\\ a_3 &= -\frac{1}{3!}a_0 \end{align} and it's clear that a relationship is forming. By induction, the whole solution can be computed. Note that, assuming that $y$ is somehow well behaved, for $x \sim 0$, $$ x y'' + (2x + 1) y' + (x + 1) y = 0 \quad \sim \quad y' + y = 0. $$ Proposing the ansatz $y(x) = e^{-x} z(x)$ and substituting in the original ODE, $$ x y'' + (2x + 1) y' + (x + 1) y = e^{-x}\left(x z'' + z'\right) = 0, $$ and it's easily verified that $z = c_1 \log x + c_2$. Hence $$ y(x) = e^{-x}\left(c_1 \log x + c_2\right) $$ Cool trick, huh?
H: Small question about derivative How to differentiate $\int_0^1 G(t,s) e(s)ds$ with respect to $t$, where $G(t,s)$ is a Green's function and $e:(0,1)\rightarrow \mathbb{R}$ is continuous with $e\in L^1(0,1)$? Please help me. Thank you. AI: You can commute the derivative and the integral operators in this case. See here.
H: Geometric proof Let the three sides of a triangle be $a,b$ and $c$. If the equation $$a^2+b^2+c^2=ab +bc+ac$$ holds true, then the triangle is an equilateral triangle. How do we prove this? An answer or even the slightest hint will be appreciated. AI: Note that $$a^2 + b^2 + c^2 = ab + bc + ca \implies (a-b)^2 + (b-c)^2 + (c-a)^2 = 0$$ I trust you can finish it off from here.
H: What's a better way to integrate this? $$ \int \frac{1}{x^2 + z^2} dx $$ I tried substitution and also by parts. By parts is getting messy and I am not sure if I am getting the right answer. I am trying to figure out an easier way or the proper way to integrate this. Could someone please show me? AI: The right substitution will do the job for you. Set $x = z \tan(t)$. We then have $dx = z \sec^2(t) dt$. Hence, $$\int \dfrac{dx}{x^2+z^2} = \int \dfrac{z \sec^2(t) dt}{z^2(\tan^2(t)+1)} = \dfrac1z \int \dfrac{\sec^2(t) dt}{\sec^2(t)} = \dfrac{t}z + \text{const} = \dfrac{\arctan(x/z)}z + \text{const}$$ We made use of the following two facts in the above derivation. $\dfrac{d(\tan(t))}{dt} = \sec^2(t)$ $\tan^2(t)+1 = \sec^2(t)$
H: Definite integration of a trigonometric function How to integrate $$\int_0^{\pi/2}\!\dfrac{2a \sin^2 x}{a^2 \sin^2 x +b^2 \cos^2 x}\,dx $$ my first step is $$\frac{2}{a} \int_0^{\pi/2}\!\dfrac{a^2 \sin^2 x}{a^2 +(b^2 - a^2) \cos^2 x}\, dx $$ I would kind of want to do some sort of $u=\cos x$ substitution, to get at $\arctan u $ but no idea what to do with the sine in the numerator. AI: $$\displaystyle \int_0^{\dfrac{\pi}{2}} \dfrac{2a \sin^2 x}{a^2\sin^2 x +b^2 \cos^2 x} dx$$ Multiplying numerator and denominator by $\csc^4 x$: $$\displaystyle 2a\int_0^{\dfrac{\pi}{2}} \dfrac{\csc^2x }{a^2\csc^2 x +b^2 \cot^2 x \csc^2 x} dx$$ $$\displaystyle 2a\int_0^{\dfrac{\pi}{2}} \dfrac{\csc^2x }{(\cot^2 x+1)(a^2+b^2\cot^2 x)} dx$$ Let $u=\cot x$ $$\displaystyle 2a\int_0^{\infty} \dfrac{du }{(u^2 +1)(a^2+b^2u^2)} $$ $$\displaystyle 2a\int_0^{\infty} \left(\dfrac{1}{(u^2+1)(a^2-b^2)}-\dfrac{b^2 }{(a^2 -b^2)(a^2+b^2u^2)}\right) du $$ $$\displaystyle \frac{2a}{a^2-b^2}\int_0^{\infty} \left(\dfrac{1}{(u^2+1)}-\dfrac{b^2 }{(a^2+b^2u^2)}\right) du $$ And I hope you'll take it from here...
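For what it's worth, carrying out the last two elementary integrals gives $\pi/(a+b)$ for $a,b>0$. Here is a quick numerical spot-check, a sketch assuming scipy and numpy are available (the values $a=3$, $b=2$ are hypothetical sample parameters):

```python
import numpy as np
from scipy.integrate import quad

a, b = 3.0, 2.0  # hypothetical sample parameter values

integrand = lambda x: 2*a*np.sin(x)**2 / (a**2*np.sin(x)**2 + b**2*np.cos(x)**2)
value, _ = quad(integrand, 0, np.pi/2)

print(value, np.pi/(a + b))  # both ~0.628318..., consistent with pi/(a+b)
```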
H: double integrals and iterated integrals Give an example (if any) of a non-integrable function $f:[0,1]^2 \to \mathbb{R}$ such that both iterated integrals exist (i.e., in both orders of integration). Here is what I have got: $$ f(x,y) = \begin{cases} e^{-xy}\sin x \sin y, & \text{if }x,y \geq 0 \\ 0, & \text{otherwise } \end{cases} $$ Does this function work for my case? I found this in: http://www.mathnet.or.kr/mathnet/kms_tex/80630.pdf. It says that the iterated integrals exist but not the double integral. I am not sure if this implies the Riemann integral does not exist. AI: Consider the function $$ f(x,y)= \frac{x^2-y^2}{(x^2+y^2)^2}. $$ Now, if you evaluate the integral $$ \int_{0}^{1}\int_{0}^{1}f(x,y)dydx = \frac{\pi}{4},$$ and if you consider the other order, you get $$ \int_{0}^{1}\int_{0}^{1}f(x,y)dxdy = -\frac{\pi}{4}. $$ So, the iterated integrals exist, but the double integral does not.
H: What is the general equation of a cubic polynomial? I had this question: "Find the cubic equation whose roots are the squares of that of $x^3 + 2x + 1 = 0$" and I kind of solved it. My answer was $x^3 - 4x^2 + 4x + 1$, but it was actually $x^3 + 4x^2 + 4x - 1 = 0$. I took the general equation of a cubic equation, which was: $x^3 +bx^2/a + cx/a + d/a$. Through simultaneous equations, I found what $b/a, c/a, d/a$ should equate to for my unknown cubic polynomial. Am I supposed to make $b/a, c/a, d/a$ all positive, then substitute it into the general formula? Any help would be greatly appreciated, thanks. AI: Let $a$ be a root of $x^3+2x+1=0$ and $b=a^2$ be a root of the required equation So, $a^3+2a+1=0\implies a\cdot b+2a+1=0\implies a=-\frac1{b+2}$ As $a$ be a root of $x^3+2x+1=0$, put this value of $a$ in $x^3+2x+1=0$ On simplification, I get $(b+2)^3-2(b+2)^2-1=0\iff b^3+4b^2+4b-1=0$ So, the required equation will be $y^3+4y^2+4y-1=0$
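A quick symbolic cross-check of the accepted approach, as a sketch assuming sympy is available: eliminating $x$ between $p(x)=0$ and $y=x^2$ via a resultant yields a cubic whose roots are the squares of the roots of $p$.

```python
import sympy as sp

x, y = sp.symbols('x y')
p = x**3 + 2*x + 1

# Eliminate x between p(x) = 0 and y = x**2: the resultant in x is a
# cubic in y whose roots are the squares of the roots of p.
q = sp.resultant(p, y - x**2, x)
print(sp.expand(q))  # y**3 + 4*y**2 + 4*y - 1, up to an overall sign convention
```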
H: $P(|X|\ge\lambda a)\ge (1-\lambda)^2a^2$ for $0\le \lambda \le 1$ If $E(X^2)=1$ and $E(|X|)\ge a >0$, then $P(|X|\ge\lambda a)\ge (1-\lambda)^2a^2$ for $0\le \lambda \le 1$. I can see from the well known inequality $E(|X|) \le E(|X|^2)^{1/2}$ that it must be the case that $a\le 1$. But what to do next I'm not sure. AI: Note that $$a\le E(|X|)=E(|X|\cdot 1_{|X|< \lambda a})+E(|X|\cdot 1_{|X|\ge \lambda a})\le \lambda a +E(|X|\cdot 1_{|X|\ge \lambda a}).\tag{1}$$ By Cauchy-Schwarz inequality and noting that $E(X^2)=1$, we have $$E(|X|\cdot 1_{|X|\ge \lambda a})\le \big(E(X^2)\big)^{\frac{1}{2}}\cdot \big(E(1_{|X|\ge \lambda a})\big)^{\frac{1}{2}}=\big(P(|X|\ge \lambda a)\big)^{\frac{1}{2}}.\tag{2}$$ Combining $(1)$ and $(2)$, the conclusion follows.
H: A definite multiple integral $$\int_0^1\int_\sqrt[3]{x}^1 4\cos(y^4)\,\mathrm dy\,\mathrm dx$$ What I got was $\sin(1)x+\cos(x^2) dx$ and now I am stuck. I suddenly froze. Could someone help me? Haven't done calculus for a long time. AI: Note that this is a definite integral, and hence the result should be a number and not a function. First, note that the inner integral is quite tough to compute. It's not obvious what an anti-derivative of $\cos(y^4)$ should be. Therefore, it is sometimes convenient to change the order of integration, i.e. in this exercise integrate with respect to $x$ first and then with respect to $y$: $$ \int_0^1\int_\sqrt[3]{x}^1 4\cos(y^4)\,\mathrm dy\,\mathrm dx=\int_0^1\int_0^{y^3}4\cos(y^4)\,\mathrm dx\,\mathrm dy. $$ All you have to consider here is why the upper and lower bounds of the integrals are as they are. To this end, observe that in the first integral we have $0\leq x\leq 1$ and $\sqrt[3]{x}\leq y\leq 1$ which is equivalent to $0\leq y\leq 1$ and $0\leq x\leq y^3$. Now, this should be an easy task to integrate.
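To double-check the order swap numerically, here is a minimal sketch assuming scipy and numpy are available; the exact value of the swapped integral is $\int_0^1 4y^3\cos(y^4)\,dy=\sin(1)$:

```python
import numpy as np
from scipy.integrate import dblquad

# dblquad integrates func(inner, outer): here y is the outer variable on
# [0, 1] and x is the inner variable on [0, y**3].
value, _ = dblquad(lambda x, y: 4*np.cos(y**4), 0, 1, 0, lambda y: y**3)

print(value, np.sin(1))  # both ~0.841470...
```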
H: When are two vectors parallel if the vectors are $5e_1-3e_2+\alpha e_3$ and $\beta e_1 + 2e_2 + 3e_3$ When are two vectors parallel if the vectors are $$5e_1-3e_2+\alpha e_3$$ and $$\beta e_1 + 2e_2 + 3e_3$$ The alpha and beta are parameters. AI: Two vectors $\vec {n_1}$ and $\vec{n_2}$ are parallel when we can write them in this way: $\vec {n_1}=k\cdot \vec{n_2}\,$ where $k$ is any scalar quantity. So the given vectors are parallel when $$ \dfrac5{\beta}=\dfrac{-3}{2}=\dfrac{\alpha}{3} $$ As shown in one of the other answers: $\vec {n_1}\times \vec {n_2}=\vec 0\implies \vec{n_1}||\vec{n_2}$, and $\vec {n_1}\times \vec {n_2}$ can be calculated as: $\vec {n_1}\times \vec {n_2}=\begin{vmatrix} {e_1} &{e_2} & {e_3}\\ 5 & -3 & \alpha \\ \beta & 2 & 3 \end{vmatrix}$ $$\vec {n_1}\times \vec {n_2}=e_1((-3\times 3)-(2\times \alpha))-e_2((5\times 3)-(\alpha\times\beta))+e_3(5\times 2-(-3\times \beta))$$ $$\vec {n_1}\times \vec {n_2}=e_1(-9-2\alpha)-e_2(15-\alpha\beta)+e_3(10+3\beta)$$
H: Understanding weighted linear least squares problem I am having difficulty understanding weighted linear least squares. Could anybody explain to me why, instead of minimizing the residual sum of squares, we need to minimize the weighted sum of squares? Further, I want to know about the term 'weighted'. I have gone through some wiki notes but I am not able to understand. Thank you very much for the help. AI: Take a look at a simple problem: $$ax=c\\ bx=d$$ where '$=$' is meant as an optimization goal, not as exact equality. So we want to find $x$ such that the error of both equalities is minimized in some optimal sense. If you choose a least squares criterion we get the following error function: $$\epsilon = (ax-c)^2+(bx-d)^2$$ Now we can decide that for some reason the error in the first equation is more important than the error in the second equation, so we can add a weight ($>1$) to the error component of the first equation: $$\hat{\epsilon} = w^2(ax-c)^2+(bx-d)^2$$ Minimizing $\hat{\epsilon}$ will result in a smaller error for the first equation at the expense of the error of the second equation. This is the basic idea of a weighted least squares error criterion. If you solve the original (unweighted) problem by solving $\frac{d\epsilon}{dx}=0$ you get the optimal solution: $$x_o=\frac{ac+bd}{a^2+b^2}$$ If you solve the weighted least squares problem by solving $\frac{d\hat{\epsilon}}{dx}=0$ you get $$\hat{x}_o=\frac{w^2ac+bd}{w^2a^2+b^2}$$ From this you can see that if the weight $w$ is chosen very large, the solution $\hat{x}_o$ becomes close to $\frac{c}{a}$, which is simply the exact solution of the first equation, not at all considering the second equation. Obviously, by using a weight $w>1$ you give more importance to the first equation.
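A tiny numerical illustration of the closed forms above, a sketch in plain Python with hypothetical values $a=b=1$, $c=2$, $d=4$:

```python
a, c = 1.0, 2.0   # first "equation":  x = 2
b, d = 1.0, 4.0   # second "equation": x = 4

def weighted_solution(w):
    # Minimizer of w^2*(a*x - c)^2 + (b*x - d)^2, cf. the closed form above.
    return (w**2 * a * c + b * d) / (w**2 * a**2 + b**2)

print(weighted_solution(1))   # 3.0   -- unweighted: splits the difference
print(weighted_solution(10))  # ~2.02 -- large w pulls x toward c/a = 2
```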
H: When is $T$-Alg monoidal closed? Given a category $\mathcal{V}$ and a monad $(T,\eta,\mu)$, what would be the sufficient conditions on $\mathcal{V}$ and $T$, for the category of $T$ algebras to be monoidal closed? (I'm pretty sure that Kock proved that, if $T$ has strength and is commutative, then $T$-alg is closed; can we relax that condition, or change it in anyway?) AI: You need that $\mathcal{V}$ is closed monoidal, $T$ is a monoidal monad and that certain coequalizers in $\mathcal{V}$ exist which commute with $T$. For details, see Tensors, monads and actions by Gavin Seal.
H: Closed subset of an affine variety... is it affine? Preliminaries So, first of all let me give you the definitions I'm dealing with. Let $k$ be an algebraically closed field, and $\mathbb{A}^n = k^n$. An affine variety is a closed and irreducible subset of $\mathbb{A}^n$. Here we endow $\mathbb{A}^n$ with the Zariski topology, so a closed subset is the zero set of an ideal in the polynomial ring $k[x_1,\dots,x_n]$. The irreducibility condition in the definition of affine variety requires that such an ideal is prime. A k-space is a topological space $X$ together with a sheaf of $k$-algebras $\mathscr{O}_X$. We require that $\mathscr{O}_X$ is a subsheaf of the sheaf of $k$-valued functions on $X$. An algebraic variety is an irreducible k-space $X$ such that $\exists$ an open cover $X=\bigcup_{i=1}^n U_i$ where each $U_i$ is an affine variety. Moreover we ask the diagonal $\Delta(X)$ to be closed in $X\times X$. Now we use the term affine variety also for an algebraic variety which is isomorphic as a $k$-space to an affine variety as defined above. The question Given a closed subset of an affine variety, is this subset affine? Remarks My professor claimed this to be true, and I noticed some people take this fact as granted on this website as well, so I guess this should be true! Nevertheless I don't see why such a closed subset should be irreducible. But the point is: do we really care? My naive understanding of this affine/not affine business is that for an affine variety we can write down something like a global coordinate system (the coordinate ring), while for a general algebraic variety we have just local coordinates, ie we have many affines patched together in a way that cannot be described globally with the usual tools. Hence if a variety is reducible this is not a real obstruction to being affine: we can still find a global coordinate system. The important thing is that all the components lie in the same $\mathbb{A}^n$ As you can see I'm a bit confused... some clarification about these ideas would be really appreciated! AI: This is just a matter of terminology. In both books I have to hand (Hartshorne, and Eisenbud's "Commutative algebra..."), the authors define an 'affine algebraic set' to be any subset of $\mathbb{A}^n$ given by the vanishing of polynomials, and an 'affine algebraic variety' to be an irreducible such set. What is perfectly clear (and is possibly what 'The question' really asks, given the absence of the word 'variety' at the end) is that any closed subset of an affine algebraic set is again an affine algebraic set. In practice though, the word 'variety' often seems to be used more generally, without the irreducibility requirement.
H: Zeros of a cubic polynomial with rational coefficients While discussing a related problem, one of my friends came out with a question as follows: Is it possible that a cubic polynomial $p(x) \in \Bbb{Q}[x]$ has all of its zeros both real and irrational? That is, can $p(x)$ be factored into the following form? $$ p(x) = a (x - \alpha_1)(x - \alpha_2)(x - \alpha_3), \quad a \in \Bbb{Q}, \ \alpha_{i} \in \Bbb{R} \setminus \Bbb{Q} $$ We struggled with this problem for a moderate time but failed to find any clue for its validity or invalidity. I guess that this is impossible, but currently I have no idea how to attack this. Can you enlighten me by showing a valid direction? AI: It is possible. For example, given a prime number $p$ and $a,b\in\mathbb Z$ with $a+b\le-2$, let $$f(x)=x^3+pax^2+pbx+p\in \mathbb Z[x].$$ Then by Eisenstein's criterion, $f$ is irreducible in $\mathbb Q[x]$, i.e. $f$ has no rational root. However, since $f(0)=p>0$ and $f(1)=1+(a+b+1)p<0$, $f$ has three distinct real roots located in $(-\infty,0)$, $(0,1)$ and $(1,+\infty)$ respectively.
H: Are these ODEs equivalent? I have the following set of ordinary differential equations: \begin{equation} \left\{ \begin{array}{l} \dot{a} = f_1(a, b, c, d) \\ \dot{b} = f_2(a, b, c, d) \\ \dot{c} = f_1(c, d, a, b) \\ \dot{d} = f_2(c, d, a, b) \end{array} \right. \end{equation} where $f_1$ and $f_2$ are two functions from $\mathbb{R}^4$ to $\mathbb{R}$. Also, I have the following initial conditions: \begin{equation} \left\{ \begin{array}{l} a(0) = \alpha \\ b(0) = \beta \\ c(0) = \alpha \\ d(0) = \beta \end{array} \right. \end{equation} and hence, under some suitable conditions, I have the unique solutions $a(t)$, $b(t)$, $c(t)$ and $d(t)$. Then, I pose that: $$x(t) = \frac{a(t) + c(t)}{2} ~ \text{and} ~ y(t) = \frac{b(t) + d(t)}{2} $$ Can I prove that there exist two functions, $g_1$ and $g_2$, from $\mathbb{R}^2$ to $\mathbb{R}$, such that the solution of the Cauchy problem \begin{equation} \left\{ \begin{array}{l} \dot{x} = g_1(x, y) \\ x(0) = \alpha \\ \dot{y} = g_2(x, y)\\ y(0) = \beta \end{array} \right. \end{equation} has the properties that $x(t) = a(t) = c(t)$ and $y(t) = b(t) = d(t)$? AI: If you can express $f_1(a,b,c,d) + f_1(c, d, a, b)$ and $f_2(a, b, c, d) + f_2(c, d, a, b)$ as functions of $(a+c)$ and $(b+d)$, then yes. If you can't, then no. Of course if $f_1(a, b, c, d)$ and $f_2(a, b, c, d)$ are both functions of $(a+c)$ and $(b+d)$ then this condition will be satisfied. Edit: corrected by @the_candyman in a comment.
H: Do these definitions of congruences on categories have the same result in this context? Let $\mathcal{D}$ be a small category and let $A=A\left(\mathcal{D}\right)$ be its set of arrows. Define $P$ on $A$ by: $fPg\Leftrightarrow\left[f\text{ and }g\text{ are parallel}\right]$ and let $R\subseteq P$. Now have a look at equivalence relations $C$ on $A$. Let's say that: $C\in\mathcal{C}_{s}$ iff $R\subseteq C\subseteq P$ and $fCg\Rightarrow\left(h\circ f\circ k\right)C\left(h\circ g\circ k\right)$ whenever these compositions are defined; $C\in\mathcal{C}_{w}$ iff $R\subseteq C\subseteq P$ but now combined with $fCg\wedge f'Cg'\Rightarrow\left(f\circ f'\right)C\left(g\circ g'\right)$ whenever these compositions are defined. Then $P\in\mathcal{C}_{s}$ and $P\in\mathcal{C}_{w}$ so both are not empty. For $C_{s}:=\cap\mathcal{C}_{s}$ and $C_{w}:=\cap\mathcal{C}_{w}$ it is easy to verify that $C_{s}\in\mathcal{C}_{s}$ and $C_{w}\in\mathcal{C}_{w}$. My question is: Do we have $C_{w}=C_{s}$ here? It is in fact the question whether two different definitions of 'congruences' both result in the same smallest 'congruence' that contains relation $R\subseteq P$. I ask it here for small categories so that I can conveniently speak of 'relations' (small sets), but for large categories I have the same question. Mac Lane works in CWM with $C_{s}$, but is $C_{w}$ also an option? AI: They are identical. I will suppress the composition symbol for brevity and convenience. Suppose first that $C \in \mathcal C_w$, and that $f C g$. Since $h C h$ and $k C k$, we have $f C g$ implies $hf C hg$, which in turn implies $hfk C hgk$. Thus $C \in \mathcal C_s$. Suppose now that $C \in \mathcal C_s$, and that $f C g, f' C g'$. Then we have $ff' C gf'$ (take $h = \operatorname{id}, k = f'$) and $gf'Cgg'$ (take $h = g, k = \operatorname{id}$). By transitivity, $ff'Cgg'$. Thus $C \in \mathcal C_w$. Therefore, $\mathcal C_s = \mathcal C_w$, and we conclude $C_s = C_w$.
H: Class Group of $\mathbb Q(\sqrt{-35})$ As an exercise I am trying to compute the class group of $\mathbb Q(\sqrt{-35})$. We have $-35\equiv 1$ mod $4$, so the Minkowski bound is $\frac{4}{\pi}\frac12 \sqrt{35}<\frac23\cdot 6=4$. So we only need to look at the prime numbers $2$ and $3$. $-35\equiv 5$ mod $8$, so $2$ is inert. Also, $-35\equiv 1$ mod $3$, so $3$ splits, i.e. $(3)=Q\overline Q$ with $Q=(3,1+\sqrt{-35})$, $\overline Q=(3,1-\sqrt{-35})$. The ideals $Q,\overline Q$ are not principal, because there are no solutions to $x^2+35y^2=12$, i.e. no elements of norm 3. Now we know that there are at most $3$ elements (or do we?), namely $(1),Q,\overline Q$. Mathematica tells me that the class number is $2$, so $Q$ and $\overline Q$ must be in the same equivalence class and $Q^2$ has to be a principal ideal. But how can I show this? AI: Note that the ring of integers is $\mathbb Z[(1+\sqrt{-35})/2]$. If you compute $(3, 1 + \sqrt{-35})^2$, you get $$(9,3 + 3\sqrt{-35}, -34 + 2 \sqrt{-35} ) = (9, 1 + \sqrt{-35}) = ( \dfrac{1-\sqrt{-35}}{2} \dfrac{1 + \sqrt{-35}}{2}, 2 \dfrac{1+\sqrt{-35}}{2}) = ((1 + \sqrt{-35})/2 )$$ (because $\dfrac{1-\sqrt{-35}}{2}$ and $2$ are coprime, and so generate the ideal $1$). I find the computation a bit easier by phrasing the factorization of $(3)$ in the following alternative way: $(3) = (3, (1 + \sqrt{-35})/2) (3,(1- \sqrt{-35})/2)$, and $$(3,(1+\sqrt{-35})/2)^2 = (9, 3(1+\sqrt{-35})/2,(-17+\sqrt{-35})/2) = ( (1+\sqrt{-35})/2 )$$ is principal. As a consistency check, note that $ (9) = (3) (3) = Q \overline{Q} Q \overline{Q} = Q^2 \overline{Q}^2,$ but also $9 = ( (1+\sqrt{-35})/2) ( ( 1 - \sqrt{-35})/2),$ so we must have $Q^2$ equal to one of $( (1 \pm \sqrt{-35})/2).$
H: Symplectic Form Preserved by Orthogonal Transformation I'm trying to prove that the symplectic form $$\omega = d(\cos\theta) \wedge d\phi$$ is preserved by the action of $SO(3)$ on $S^2$ where $\phi$ and $\theta$ are spherical polars. Now $SO(3)$ simply acts by $$\theta \mapsto \theta + \epsilon, \ \phi \mapsto \phi + \delta$$ and writing this diffeomorphism as $F:S^2 \to S^2$ I compute $$F^*(\omega) = d(\cos(\theta + \epsilon))\wedge d(\phi + \delta) = \cos(\epsilon) d(\cos\theta)\wedge d\phi - \sin(\epsilon) d(\sin \theta)\wedge d \phi$$ Is this correct? It doesn't seem like the form is invariant under $SO(3)$. Perhaps it's only meant to be a local symplectomorphism though. Am I allowed to claim that it is a local symplectomorphism because it gives the right result in the limit as $\epsilon \to 0$? I think that would be right, because it would mean that the Lie derivative vanishes. Thanks in advance! AI: How many Euler angles are there? Can you compute new $\theta$ and $\phi$ in terms of them? Can you represent these transformations as $\theta\mapsto\theta+\epsilon$, $\phi\mapsto\phi+\delta$?
H: How to solve this integral for a hyperbolic bowl? $$\iint_{s} z dS $$ where S is the surface given by $$z^2=1+x^2+y^2$$ and $1 \leq(z)\leq\sqrt5$ (hyperbolic bowl) AI: A related problem. Note that, $$ z=\sqrt{ 1+x^2+y^2 } \implies z_x=\frac{x}{\sqrt{ 1+x^2+y^2 }},\quad z_y=\frac{y}{\sqrt{ 1+x^2+y^2 }} $$ $$ \iint_{s} z dS = \iint_{D} z \sqrt{1+\left(\frac{\partial z}{\partial x}\right)^2+\left(\frac{\partial z}{\partial y}\right)^2} dA $$ $$ = \iint_{D} \sqrt{1+(x^2+y^2)}\sqrt{{\frac {1+2\,{x}^{2}+2\,{y}^{2}}{1+{x}^{2}+{y}^{2}}}}\,dxdy $$ $$ =\iint_{D} \sqrt{{{1+2\,{x}^{2}+2\,{y}^{2}}{}}}\,dx dy $$ Now, $D\equiv \left\{ x^2+y^2 \leq 4 \right\}$. To see this notice that $$ 1 \leq z\leq\sqrt5 \implies 1 \leq \sqrt{1+x^2+y^2} \leq\sqrt5 \implies x^2+y^2\leq 4. $$ So, we can use polar coordinates as $$ = \int_{0}^{2\pi}\int_{0}^{2} \sqrt{1+2 r^2}\,r\, dr d\theta = \frac{26\pi}{3} . $$ Added: if you want to parametrize the surface, you go this way, $$ x=r\cos(\phi),\quad y = r\sin(\phi), \quad z^2 = 1+x^2+y^2= 1+r^2 \implies z=\sqrt{1+r^2}. $$ You can write it in a vector form as $$ \textbf{T}(r,\phi)= r\cos(\phi)\textbf{i}+ r\sin(\phi)\text{j}+ \sqrt{1+r^2}\, \text{k} $$ $$ \implies T_r = \cos(\phi)\textbf{i}+ \sin(\phi)\text{j} + \frac{r}{\sqrt{1+r^2}} \text{k}, $$ $$ T_\phi = -r\sin(\phi)\textbf{i} + r\cos(\phi)\text{j}+ 0 \,\text{k}.$$
H: Application of Open Mapping Theorem This was stated without proof in the complex analysis text I am reading (Complex Made Simple by Ullrich, page 107). I'm sure it's easy, but I'm tired and need a little help. Let $f$ be nonconstant and holomorphic in some region $V$ and assume $f'$ is nonconstant. Define $$\Omega = \left\{ \frac{f(z)-f(w)}{z-w}: z,w\in V, z\neq w \right\}.$$ Why is $\Omega$ open? I don't see how to use the hypothesis that $f'$ is nonconstant. To be clearer: I tried something like the answers below. But Ullrich specifically mentions that the hypothesis that $f'$ is nonconstant is needed in addition to the hypothesis that $f$ is nonconstant. So I feel I'm missing some subtle point about the derivative. Edit: After reading the answers below, the key point is $f'$ nonconstant implies the difference quotient is nonconstant. AI: Given $w\in V$, define $$g_w:V\setminus\{w\}\to\mathbb C,\quad z\mapsto \frac{f(z)-f(w)}{z-w}.$$ Since $f$ is holomorphic on $V$, $g_w$ is holomorphic on $V\setminus\{w\}$; and $g_w$ is nonconstant, because if $g_w$ were identically equal to some constant $c$, then $f(z)=f(w)+c(z-w)$ would be affine, making $f'$ constant. Noting that $V\setminus\{w\}$ is open, by the open mapping theorem, $g_w(V\setminus\{w\})$ is an open subset of $\mathbb C$. It follows that $$\Omega=\bigcup_{w\in V}g_w(V\setminus\{w\})$$ is open.
H: How to graph an absolute value equation? How would you graph: $|x+y|=1$ ? I can do the normal $y=|x+1|$ and all that. But how would you do a question with two of these unknowns in the absolute value? Any help would be greatly appreciated, thanks. AI: It doesn't seem as though $|x+y|=1$ is a function, since for $x=2$, the values $y=-1$ and $y=-3$ both satisfy the equation. The way to graph this would be to graph two individual equations: $x+y=1$ and $x+y=-1$ (these two equations result when we lift the absolute value). So we graph separately: $$y=-x+1$$$$y=-x-1$$
H: value of a fraction Can we find the value of $$\frac{1+i}{1-i}\,?$$ I have tried to solve it by multiplying numerator and denominator by $(1+i)$, and in the end I see that the result is $i$. Am I correct, or is there a different solution? AI: You divide by a complex number by multiplying numerator and denominator by its conjugate, that is, by replacing $+i$ in the divisor with $-i$. The product $(a+ib) \cdot (a-ib) = a^2+b^2$ is real. $$ \frac{1+i}{1-i}\cdot\frac{1+i}{1+i}=\frac{2i}2=i. $$
H: How can I practice Jean-Robert Argand's idea of the rotation of a square root of -1 I am studying complex numbers and I really need an intuition on how they work. I found the following video of a mathematician named Adrien Douady https://www.youtube.com/watch?v=2kbM96Jr4nk He explains complex numbers in an algebraic-geometric way, as he mentions using his protractor and turning it by 180 degrees to represent multiplication by (-1). But then at 4:26 he mentioned a mathematician Robert Argand http://en.wikipedia.org/wiki/Jean-Robert_Argand, and that he had a great idea. "He said to himself, since multiplying by -1 is a 180 degree rotation, the square root is 1/2 of a 180 degree rotation." How is this? How can I practice this? From what philosophy of mathematics does this come? Thanks, and forgive me if I sound too dumb. AI: For vectors on the plane, the mapping $v\mapsto c\cdot v$ is zooming the vector by scalar $c$, if $c\in\Bbb R$. When we compose two such maps, $f:=v\mapsto c\cdot v$ and $g:=v\mapsto d\cdot v$, we have that $g\circ f=v\mapsto d\cdot c\cdot v$. In particular, if $c=d$, we get $v\mapsto c^2\cdot v$. The mapping $s:v\mapsto -v$ can indeed be viewed as the rotation by $180^\circ$. Let us find its square root(s)! Both rotations $r_{1,2}$ by $\pm90^\circ$ are such that $r_1\circ r_1=s=r_2\circ r_2$. This can be set in a correspondence with the complex imaginary units $\pm i$. This way, complex numbers can be identified with 'zooming-and-rotating' transformations of the plane:

- The complex number $1$ corresponds to the identity, i.e. zoom by $1$ and rotate by $0^\circ$.
- $0$ corresponds to the degenerate zooming by $0$, i.e. the mapping $v\mapsto 0$.
- A positive real number $c$ corresponds to the zooming by $c$ (i.e. $v\mapsto c\cdot v$) and no rotation (i.e. rotating by $0^\circ$).
- A negative real number $-c$ (with $c>0$) corresponds to the zooming by $c$ and rotating by $180^\circ$.
- The imaginary unit $i$ corresponds to rotation by $+90^\circ$ (and no zooming, i.e. zoom by $1$).

Addition of complex numbers corresponds to 'pointwise addition' of these transformations: $$(f+g):=v\mapsto f(v)+g(v)$$ (There's an important point hidden here: try to prove that $f+g$ is again a 'zooming-and-rotating' transformation.) Multiplication of complex numbers corresponds to composition of these transformations.
H: Find vectors perpendicular to given vectors with a certain length Given the vectors $\mathbf{u,v}$ in R³, determine all vectors that are perpendicular to $\mathbf{u}$ and $\mathbf{v}$ with length = 1 Every vector $\mathbf{x'}$ that is to be found must meet these two conditions: $\mathbf{x'} \cdot \mathbf{u} = \mathbf{x'} \cdot \mathbf{v}=0$ $\|\mathbf{x'}\| = 1$ My approach is to insert the given vector components into the first condition and get something like this $ ax_1' + bx_2' + cx_3' =0 $ since the length of the vectors should be 1, there's $||x'||=\sqrt{( x' \cdot x')}=\sqrt{x_1'^2+x_2'^2+x_3'^2}=1$ is that the right way to go? I don't really know how to tackle this AI: If you want it to be perpendicular to both $\mathbf{u}$ and $\mathbf{v}$ then take $ \mathbf{u} \times \mathbf{v}$. And then to normalize it (so that it has length 1) just divide this vector by its norm to get $$\mathbf{x'}=\frac{\mathbf{u} \times \mathbf{v}}{|| \mathbf{u} \times \mathbf{v}||}$$ Its negative $-\mathbf{x'}$ is the only other such vector, provided $\mathbf{u}$ and $\mathbf{v}$ are linearly independent.
H: Are the area of a circle inscribed in a square and the area of the "spandrels" (the four corners that remain) commensurable? And how would you demonstrate that most simply? See the beginning of my blog post for a little more: http://seekecho.blogspot.fr/2013/02/different-ilks.html AI: So you want to show that the area of the circle, i.e. $\pi$, is not commensurable with the area of “square minus circle”, i.e. $4-\pi$, right? Two things are commensurable if their ratio is a rational number. Now look at it this way: $$\frac{4-\pi}\pi = \frac4\pi-1 = r\not\in\mathbb Q$$ Suppose $r$ were a rational number, then all the following would be rational numbers as well: \begin{align*} \frac4\pi &= r + 1 \\ \frac1\pi &= \frac{r+1}4 \\ \pi &= \frac4{r+1} \end{align*} But as you know that $\pi$ is irrational, you know that $r$ cannot be rational. Given two incommensurable numbers $a$ and $b$, the set $\{pa+qb\;\vert\;p,q\in\mathbb Q\}$ can be interpreted as a vector space over $\mathbb Q$ with dimension two. You can check all the vector space axioms to see that this is true. The switch from “circle and square” to “circle and spandrel” is simply a change in basis vectors, but does not make the vectors linearly dependent. So as a general conclusion, two numbers $p_1a+q_1b$ and $p_2a+q_2b$ will still be incommensurable unless they are linearly dependent in the vector space sense, i.e. unless $$\begin{vmatrix} p_1 & p_2 \\ q_1 & q_2 \end{vmatrix} = p_1q_2 - p_2q_1 = 0$$
H: Given $x_1=1,x_2=2,x_{n+2}=3x_{n+1}-x_n\forall n\in\mathbb N$. Find $x_n$. Given $x_1=1,x_2=2,x_{n+2}=3x_{n+1}-x_n\forall n\in\mathbb N$. Find $x_n$. I tried to find ways to telescope, but failed. Please help. Thank you. AI: There is a general way to solve those type of recurrences, through the characteristic polynomial of the relation. $$ x_{n+2}+ax_{n+1}+bx_{n} = 0 \leadsto X^2+aX+b=0 $$ Here, it'd be $X^2-3X+1$. The roots are $\phi=\frac{3+\sqrt{5}}{2}$ and $\bar{\phi}=\frac{3-\sqrt{5}}{2}$. Any solution $(x_n)_{n\in\mathbb{N}^\ast}$ is of the form $x_n = \alpha \phi^{n-1} + \beta \bar{\phi}^{n-1}$ (since the two roots ${\phi},\bar{\phi}$ are distinct). Then use the initial conditions $x_1$ and $x_2$ to get $\alpha,\beta$.
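A short sketch checking the closed form against the recurrence (plain Python; the constants $\alpha,\beta$ come from solving the two initial conditions):

```python
import math

phi = (3 + math.sqrt(5)) / 2
phibar = (3 - math.sqrt(5)) / 2

# From x_1 = alpha + beta = 1 and x_2 = alpha*phi + beta*phibar = 2:
beta = (phi - 2) / (phi - phibar)
alpha = 1 - beta

def x_rec(n):
    a, b = 1, 2
    for _ in range(n - 1):
        a, b = b, 3*b - a
    return a

for n in range(1, 8):
    closed = alpha * phi**(n - 1) + beta * phibar**(n - 1)
    print(n, x_rec(n), round(closed))  # columns agree: 1, 2, 5, 13, 34, ...
```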
H: How do I calculate typical group size? If I have a set of groups of individuals (e.g. people), and I want to calculate the typical group size (as observed by individuals), how do I do this? Wikipedia refers to this as "mean crowding" or "Typical Group Size" but doesn't give a formula: http://en.wikipedia.org/wiki/Group_size_measures I believe the "typical group size" for a set of group sizes $\{a_1, a_2,..., a_n\}$ is simply: $$\sqrt{\frac{1}{n}\sum_{i=1}^na_i^2 }$$ i.e. the root mean square or quadratic mean. Is that correct? Is there a better way to characterise typical group size? AI: The precise definition (according to the Wikipedia link in your question) seems to be that the typical group size is the average size of the group as observed by the individuals of your population. Thus, if you have a population of $N$ individuals, split into $K$ groups with size $n_k$ (thus, $\sum_{k=1}^K n_k = N$), the typical group size would be $$ T = \frac{1}{N} \sum_{k=1}^K \left(\sum_{i=1}^{n_k} n_k\right) = \frac{1}{N}\sum_{k=1}^K n_k^2 $$ since the $n_k$ individuals in group $k$ all observe a group size of $n_k$. It's entirely possible that it's for some reason more convenient to define the typical group size as $\sqrt{T}$ instead of plain $T$, but it then ceases to be the average group size as observed by the individuals.
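A minimal sketch of that formula (the function name is my own choice):

```python
def typical_group_size(group_sizes):
    """Mean group size as experienced by a randomly chosen individual."""
    N = sum(group_sizes)
    return sum(n * n for n in group_sizes) / N

# One group of 9 and nine singletons: the mean group size is 18/10 = 1.8,
# but the group size seen by a typical individual is much larger.
print(typical_group_size([9] + [1] * 9))  # (81 + 9) / 18 = 5.0
```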
H: get the distribution function I have the following density function: $ f(x) = \left\{ \begin{array}{l l} cx^2+|x| & \quad \text{if -1/2<x<1/2}\\ 0 & \quad \text{otherwise} \end{array} \right.$ We know that $\int_{-\infty}^{\infty}f(x)\,dx=\int_{-1/2}^{1/2}(cx^2+|x|)\,dx = 1$, and I get $c=9$. The distribution function should be: $ F(x) = \left\{ \begin{array}{l l} 3x^3-x^2/2 & \quad \text{if -1/2<x<0}\\ 3x^3+x^2/2 & \quad \text{if 0 <= x<1/2}\\ 0 & \quad \text{otherwise} \end{array} \right.$ If I want to test whether $F(x)\to 1$ as $x\to\infty$, it is not $1$; is the distribution function right? AI: The cdf is not right. We will use the fact that $$F(x)=\int_{-\infty}^x f(t)\,dt,$$ where $f(t)$ is the density function. Thus $F(x)=0$ for $x\lt -\frac{1}{2}$. For $-\frac{1}{2}\le x\le 0$, we have $$F(x)=\int_{-1/2}^x (9t^2-t)\,dt.$$ The antiderivative was computed correctly, but the substitution of endpoints was not. Once you compute an antiderivative $G(t)$, you need to find $G(x)-G(-1/2)$. You should end up with $\frac{1}{2}$ more than what you actually got. For $0\lt x\le \frac{1}{2}$, $$F(x)=\frac{1}{2}+\int_0^x (9t^2+t)\,dt.$$ Finally, $F(x)=1$ for $x\gt \frac{1}{2}$.
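A symbolic check of the answer's computation, as a sketch assuming sympy is available; note the constant $+\frac12$ that appears when the lower limit $-\frac12$ is substituted:

```python
import sympy as sp

t, x = sp.symbols('t x')
half = sp.Rational(1, 2)

# c = 9 indeed normalizes the density:
print(sp.integrate(9*t**2 + sp.Abs(t), (t, -half, half)))  # 1

# CDF on [-1/2, 0]:
F_neg = sp.integrate(9*t**2 - t, (t, -half, x))
print(sp.expand(F_neg))  # 3*x**3 - x**2/2 + 1/2
```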
H: Show that $\frac {a_1^2}{a_2}+\frac {a_2^2}{a_3}+...+\frac {a_n^2}{a_1}\geq a_1+a_2+...+a_n$ using AM-GM. Given $a_1,a_2,...,a_n$ be positive reals. Show that $\displaystyle\frac {a_1^2}{a_2}+\frac {a_2^2}{a_3}+...+\frac {a_n^2}{a_1}\geq a_1+a_2+...+a_n$ using AM-GM. I know how to slve it using rearrangement inequality, but I can't. How should I apply AM-GM? Thanks. AI: $$\dfrac{a^2_{1}}{a_{2}}+a_{2}\ge 2a_{1}$$ $$\dfrac{a^2_{2}}{a_{3}}+a_{3}\ge 2a_{2}$$ $$\cdots\cdots$$ $$\dfrac{a^2_{n}}{a_{1}}+a_{1}\ge 2a_{n}$$ add all inequalities and you're done!
H: sum of ten squares You are given an unlimited supply of $1\times 1,2\times 2,3\times 3,4\times 4,5\times 5,6\times 6$ squares. Find a set of ten squares whose areas add up to $48$. If not the whole solution, even a little prod in the right direction would help. AI: One solution is obvious: 8 squares of 1x1, 1 square of 2x2, and 1 square of 6x6. $$ Area = (8 \times 1) + (2 \times 2) + (6 \times 6) = 48$$ (A brute-force search sketch follows below.)
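Here is a small brute-force sketch (plain Python) enumerating all such multisets; it confirms the solution above and shows whether others exist:

```python
from itertools import combinations_with_replacement

# All multisets of ten side lengths from 1..6 whose squared sum is 48.
for sides in combinations_with_replacement(range(1, 7), 10):
    if sum(s * s for s in sides) == 48:
        print(sides)
# (1, 1, 1, 1, 1, 1, 1, 1, 2, 6) is the solution above; a few other
# combinations turn up as well.
```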
H: Calculate gas-station probabilities I would like to calculate probabilities for the next exercise: Knowing the average amount of cars that drive per minute into a gas-station is 3. ** How can I calculate the probability of arriving at least 12 cars into the station in a period of 5 minutes? ** And the probability of a car arriving before two minutes go by? Thank you all very much in advance. AI: We use a Poisson model. So if $X$ is the number of cars in a randomly chosen $1$-minute interval, then $X$ is assumed to have Poisson distribution with parameter $3$. Thus by general theory, the number $Y$ of cars in a $5$-minute interval has Poisson distribution with parameter $\lambda=15$. Thus the probability of fewer than $12$ cars in a $5$-minute interval is $$\sum_{k=0}^{11}e^{-\lambda}\frac{\lambda^k}{k!}.\tag{$1$}$$ This is a somewhat unpleasant calculation. Maybe compute from $k=11$ down. After a while, the terms you are adding become negligible. The probability of at least $12$ cars is $1$ minus the probability computed in $(1)$. The number $Z$ of cars in a $2$-minute interval has Poisson distribution with parameter $6$. We want the probability of at least one car, which is $1$ minus the probability of no cars. The probability of no cars is $e^{-6}$. Alternatively, for the no-cars problem one can use the relationship between the Poisson and the exponential.
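A short sketch of the two computations (plain Python):

```python
from math import exp, factorial

lam = 15  # Poisson parameter for a 5-minute interval: 3 cars/min * 5 min

p_fewer_than_12 = sum(exp(-lam) * lam**k / factorial(k) for k in range(12))
print(1 - p_fewer_than_12)  # P(at least 12 cars in 5 minutes) ~ 0.815

print(1 - exp(-6))          # P(at least one car within 2 minutes) ~ 0.9975
```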
H: A problem related to an integral equation I am stuck on the following problem: The integral equation $\quad \varphi(x)-\lambda \displaystyle\int_{-1}^{1}\cos[\pi(x-t)]\varphi(t) dt= g(x)$ has 1. a unique solution for $ \lambda \ne 1$ when $g(x)=x$; 2. no solution for $\lambda \ne 1$ when $g(x)=1$; 3. no solution for $\lambda =1$ when $g(x)=x$; 4. an infinite number of solutions for $\lambda =1$ when $g(x)=1$. What I did: $$\begin{align}\quad g(x)&= \varphi(x)-\lambda \displaystyle\int_{-1}^{1}\cos[\pi(x-t)]\varphi(t)dt \\ \implies g(x) &=\varphi(x)-\lambda \displaystyle \int_{-1}^{1}\cos[\pi x-\pi t]\varphi(t)dt\\ &= \varphi(x) -\lambda \displaystyle\int_{-1}^{1} [\cos \pi x \cos \pi t +\sin \pi x \sin \pi t]\varphi(t)dt \\&=\varphi(x)- \lambda\cos \pi x\displaystyle\int_{-1}^{1} \cos \pi t \varphi(t)dt -\lambda\sin \pi x \displaystyle\int_{-1}^{1} \sin \pi t \varphi (t) dt \\&=....??\end{align}$$ Can someone explain with some details how to tackle these types of problems? Thanks in advance for your time. MY REATTEMPT TO THE PROBLEM: (Using @Glen O 's suggestion) From the given problem we see $$\begin{align}\quad \varphi(x)&=g(x)+\lambda \{A \cos (\pi x)+B \sin (\pi x)\} \end{align}$$, where $A=\displaystyle \int_{-1}^1 \cos(\pi t)\varphi(t)dt \longrightarrow \clubsuit$ and $B=\displaystyle \int_{-1}^1 \sin(\pi t)\varphi(t)dt \longrightarrow \spadesuit$ . After calculations, we get from $ \clubsuit \space \text{and}\space \spadesuit \space , A(1-\lambda)=\displaystyle \int_{-1}^1 \cos(\pi t)g(t)dt \space \text{and} \space B(1-\lambda)=\displaystyle \int_{-1}^1 \sin(\pi t)g(t)dt$ . Now, (1) for $\lambda \ne 1,g(t)=t$, we get $A(1-\lambda)=0,B(1-\lambda)=\frac{2}{\pi}$ and so we can have a unique value of $A$ and $B$ and hence option $1$ is true. (2) For $\lambda \ne 1,g(t)=1,$ we get $A(1-\lambda)=0,B(1-\lambda)=0$ and so we have $A=B=0$ and hence the equation turns out to be $\varphi(x)=g(x)$ and hence no solution and so statement $2$ holds good. (3) For $\lambda= 1,g(t) =t,$ we get $ 0 \times B=\frac{2}{\pi}$ which is impossible and hence statement $3$ does not hold. (4) For $\lambda= 1,g(t) =1,$ we get $A(1-\lambda)=0,B(1-\lambda)=0$ and an infinite number of values for $A,B$ is possible. And hence statement $4$ holds true. Am I missing something in my argument? Please feel free to comment. AI: Hint: As you correctly note, you may write the integral equation in the form $$ \varphi(x) - \lambda \cos(\pi x)\int_{-1}^1\cos(\pi t)\varphi(t)dt-\lambda \sin(\pi x)\int_{-1}^1\sin(\pi t)\varphi(t)dt=g(x) $$ Notice that the integrals evaluate to simple numbers. So we have a solution of the form $$ \varphi(x) = g(x)+\lambda\big(A\cos(\pi x)+B\sin(\pi x)\big) $$ Now, $A=\int_{-1}^1 \cos(\pi t)\varphi(t)dt$. What do you get if you substitute our solution in for $\varphi(t)$, given $g(x)$? Similarly, what do you get for $B$?
H: Intersection between sphere and cylinder I have a sphere and a cylinder. I have the center and the radius of each of them. the sphere: radius = $r_1$ center = $(x_1,y_1,z_1)$ the cylinder: radius = $r_2$ height = $h_2$ center = $(x_2,y_2,z_2)$ how do I know if there is an intersection? I read this one: http://en.wikipedia.org/wiki/Sphere%E2%80%93cylinder_intersection but my center is not $(x_1,0,0)$ but $(x_1,y_1,z_1)$. the radius in my sphere is 1, and the center point is $(x_1,y_1,z_1)$ the radius in my cylinder is 1, height is 10 and the center point is $(x_2,y_2,z_2)$. I thought to convert my center point of $(x_1,y_1,z_1)$ into $(x_1,0,0)$ and then know if there is a intersection or not, according to the reference I linked.. any help appreciated! AI: You can translate the points to put the center of the sphere at the origin by making the center of the sphere $(0,0,0)$ and the center of the cylinder $(x_2-x_1,y_2-y_1,z_2-z_1)$ This is a translation of the space. If you want your results in the original coordinate system, just add $(x_1,y_1,z_1)$ to them.
H: Simplifying fractions - Ending up with wrong sign I've been trying to simplify this $$ 1-\frac{1}{n+2}+\frac{1}{(n+2) (n+3)} $$ to get it to this $$ 1-\frac{(n+3)-1}{(n+2)(n+3)} $$ but I always end up with this $$ 1-\frac{(n+3)+1}{(n+2)(n+3)} $$ Any ideas of where I'm going wrong? Wolfram Alpha gets it to the correct form but it doesn't show me the steps (even in the pro version) Thanks AI: Everywhere there is a minus sign, replace it with plus a negative. So with your original expression, try instead simplifying $$ 1+\frac{-1}{n+2}+\frac{1}{(n+2) (n+3)} $$ and you should be much less prone to error.
H: Power series of $\frac{\sqrt{1-\cos x}}{\sin x}$ When I'm trying to find the limit of $\frac{\sqrt{1-\cos x}}{\sin x}$ when x approaches 0, using power series with "epsilon function" notation, it goes : $\dfrac{\sqrt{1-\cos x}}{\sin x} = \dfrac{\sqrt{\frac{x^2}{2}+x^2\epsilon_1(x)}}{x+x\epsilon_2(x)} = \dfrac{\sqrt{x^2(1+2\epsilon_1(x))}}{\sqrt{2}x(1+\epsilon_2(x))} = \dfrac{|x|}{\sqrt{2}x}\dfrac{\sqrt{1+2\epsilon_1(x)}}{1+\epsilon_2(x)} $ But I can't seem to do it properly using Landau notation I wrote : $ \dfrac{\sqrt{\frac{x^2}{2}+o(x^2)}}{x+o(x)} $ and I'm stuck... I don't know how to carry these o(x) to the end Could anyone please show me what the step-by-step solution using Landau notation looks like when written properly ? AI: It is the same as in the "$\epsilon$" notation. For numerator, we want $\sqrt{x^2\left(\frac{1}{2}+o(1)\right)}$, which is $|x|\sqrt{\frac{1}{2}+o(1)}$. In the denominator, we have $x(1+o(1))$. Remark: Note that the limit as $x\to 0$ does not exist, though the limit as $x$ approaches $0$ from the left does, and the limit as $x$ approaches $0$ from the right does.
H: Raise a number to the "y" power without using exponents. This is kind of a spinoff on my question Divide by a number without dividing. Can anyone think of some clever ways to raise any given number to any given power without using an exponent anywhere in your equation/formula? $$x^{y}=z$$ AI: You can always use the Taylor series for $f(u) = e^u$. $$ x^y = 1 + y \ln x + \frac{(y \ln x)(y \ln x)}{2!} + \frac{(y \ln x)(y \ln x)(y \ln x)}{3!} + \cdots $$
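As a concrete sketch of the series approach (plain Python; the helper name `power` is my own, `math.log` supplies $\ln x$, and each term is built by multiplication alone, so no exponentiation operator appears):

```python
from math import log

def power(x, y, terms=60):
    """Approximate x**y for x > 0 via the series for e^(y*ln x)."""
    u = y * log(x)
    total, term = 1.0, 1.0
    for n in range(1, terms):
        term = term * u / n   # builds u^n / n! multiplicatively
        total += term
    return total

print(power(2.0, 10))   # ~1024.0
print(power(9.0, 0.5))  # ~3.0
```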
H: Non-vanishing 2-form on quartic surface. Let $S\subset \mathbb P^3$ be a quartic surface defined by a homogeneous degree 4 polynomial $F\in k[x_0,x_1,x_2,x_3]$. $S$ is a K3 surface, so it has a unique non-vanishing $(2,0)$-form $\omega$ up to scalar. How can this $\omega$ be computed? AI: If $S$ is smooth, $\omega$ can be calculated as a Poincaré residue (see Griffiths and Harris, pg. 147, for details). It boils down to the following: in an affine patch with coordinates $z_1,z_2,z_3$, where $F$ is represented by $f(z_1,z_2,z_3)$, $$ \omega = \int_{f=0} \frac{dz_1\wedge dz_2\wedge dz_3}{f} $$ where the integral is a higher-dimensional contour integral -- it is taken over the boundary of some arbitrarily smooth tubular neighbourhood of $S$.
H: A simple question about a bounded function Let $f$ be a function defined on $[0,\infty)$. If $|f(x)| \leq M$ for all $ x \in [0, \infty)$, then can I say $$ \exists C,R >0 : |f(x)| \leq \frac{C}{(1+x)^2}\;\;(x \geq R) $$ is equivalent to $$ \exists C >0 : |f(x)| \leq \frac{C}{(1+x)^2} \;\; (\forall x \in [0,\infty))\;?$$ I think this is true because for $ x \in [0, R)$, $$ |f(x)| \leq M = \frac{M(1+x)^2}{(1+x)^2} \le \frac{M(1+R)^2}{(1+x)^2} = \frac{C'}{(1+x)^2}.$$ I want to know whether my statements are true or not. Thank you. AI: Your proof is correct. In particular, you have shown that if there exist $R$ and $C$ as specified in the first equation, you can take $C$ in the second equation to be the maximum of $C$ and $C'$. Thus the first equation implies the second. And the second equation also implies the first (pick $R$ to be anything at all). Thus, the two equations are equivalent.
H: Linear combination of vectors in $\mathbb{R}^3$ Show that any linear combination of $\pmatrix{1\\\frac{3}{2}\\0}$ and $\pmatrix{0\\3\\6}$ is also a linear combination of $\pmatrix{2\\3\\0}$ and $\pmatrix{0\\1\\2}$ I'm not sure how to do this. I have a proof sketch for showing that any 2-dimensional vector is a linear combination of any other two nonparallel 2d vectors, but as far as I understand, this is not the case for 3d vectors. So how should I go about showing this? Thanks. AI: Hint: Your third and fourth vectors are just scaled versions of the first two vectors.
H: Apples and their volumes An apple has a peel that is 1cm thick and a total diameter of 12cm. What percentage of volume of the apple is the peel? I tried $$\frac{\text{volume}(\text{radius of 6})-\text{volume}(\text{radius of 5})}{\text{volume}(\text{radius of 6})}$$ and got 42%. Is this correct? AI: $$ \frac{\frac{ 4\pi6^3}{3} - \frac{4\pi 5^3}{3}}{\frac{4\pi 6^3}{3}} = \frac{6^3 - 5^3}{6^3} = \frac{216-125}{216} = \frac{91}{216} \approx 0.42 $$ You are right.
H: How many options are there for 15 students to divide into 3 equal-sized groups? How many options are there for 15 students to divide into 3 equal-sized groups? Now I know the solution is $\;\dfrac {15!}{5!5!5!3!}\;$ but I can't understand why. Can anyone please enlighten me? AI: Consider the following points: Suppose that we line up the students in a line and choose the first 5 students to belong to group 1, the next 5 to group 2 and the last 5 to group 3. There are 15! ways to line up students in a line. However, in any particular line up, the order of the students in each group does not matter. Thus, for each line up, we have 5! 5! 5! ways of arranging the students in each group. Therefore, the number of ways to arrange students in 3 equal groups is: $$\frac{15!}{5! 5! 5!}$$ But, the labeling of the groups also does not matter. We can label the group 1 to be group 2 and group 2 to be group 3 and so on. We have 3! ways to label the groups. Thus, the final number of ways to arrange students in 3 equal groups is: $$\frac{15!}{5! 5! 5! 3!}$$
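A one-line check of the count (plain Python):

```python
from math import comb, factorial

# Choose 5 of 15, then 5 of the remaining 10, then the last 5; divide by 3!
# because the three groups are unlabeled.
print(comb(15, 5) * comb(10, 5) * comb(5, 5) // factorial(3))  # 126126
print(factorial(15) // (factorial(5)**3 * factorial(3)))       # same: 126126
```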
H: On finding adjoint of transformation. Let $V$ be an inner product space and $v,w\in V$ be fixed vectors. Define $T(u)=(u,v)w$. How to find the adjoint mapping $T^*$? AI: From the definition of adjoint: $(Tu, z) = (u, T^*z)$ So, for all $u, z \in V$: $(Tu, z) = ( (u,v)w, z ) = (u,v) \cdot (w,z) = (u, \overline{(w,z)}v) = (u, (z, w) v)$ So, $T^*(z) = (z,w)v$.
H: Does $u\in L^p(B)$ imply $u_{|\partial B_t}\in L^p(\partial B_t)$ for almost every $t\in (0,1]$? Let $B$ be the unit ball in $\mathbb{R}^N$ with center at the origin and consider the space $L^p(B)$ with Lebesgue measure ($1<p<\infty$). Let $B_t\subset B$ be a concentric ball of radius $t\in (0,1]$. For fixed $t$ we can define $L^p(\partial B_t$) in the same sense as $L^p(B)$, by using the surface measure of $\partial B_t$. My question is: Let $u\in L^p(B)$. Is it true that for almost every $t\in (0,1]$ (Lebesgue measure), it does make sense to talk about the (classical) restriction of $u$ to $\partial B_t$, to wit, $u_{|\partial B_t}$, and $u_{|\partial B_t}\in L^p(\partial B_t)$? Thank you AI: By the change of variables theorem for Lebesgue integration on $\mathbb{R}^n$ we have $$ \int_{B}|f(x)|^pdx=\int_0^1t^{n-1}\Bigl(\int_{\partial B}|f(t\,x)|^p\,d\sigma\Bigr)\,dt=\int_0^1\Bigl(\int_{\partial B_t}|f(x)|^p\,d\sigma_t\Bigr)\,dt<\infty, $$ where $d\sigma$ is surface measure on $\partial B$ and $d\sigma_t$ is surface measure on $\partial B_t$. Thus $$ \int_{\partial B_t}|f(x)|^p\,d\sigma_t<\infty\quad $$ for almost every $t\in(0,1\,]$.
H: Which of the following are subspaces of $M$? Let $M$ be the vector space of all $3\times 3$ real matrices and let $$A=\begin{pmatrix}2&3&1\\0&2&0\\0&0&3\end{pmatrix}.$$ Which of the following are subspaces of $M$? $\{X\in M:XA=AX\}$ $\{X\in M:X+A=A+X\}$ $\{X\in M:\text{trace}(XA)=0\}$ $\{X\in M:\det(XA)=0\}$ AI: Hints: $X+A=A+X$ for all $X\in M$, since matrix addition is commutative. The trace function is additive. Since $\det(A)\neq 0$, $\det(XA)=0$ iff $\det(X)=0$.
H: Differentiability of $f(x) = x^2 \sin{\frac{1}{x}}$ and $f'$ Let $f(x) = x^2 \sin{\frac{1}{x}}$ for $x\neq 0$ and $f(0) =0$. (a) Use the basic properties of the derivative, and the Chain Rule to show that $f$ is differentiable at each $a\neq 0$ and calculate $f'(a)$. You may use without proof that $\sin$ is differentiable and that $\sin' =\cos$. Not even sure what this is asking. (b) Show that $f$ is differentiable at $0$ and that $f'(0) =0$. $\frac {f(x)-f(0)}{x-0} = x \sin(1/x)$, and $|x \sin(1/x)| \leq |x|$ with $\lim_{x \to 0} |x|=0$. Thus $f$ is differentiable at $0$; moreover $f'(0)=0$. (c) Show that $f'$ is not continuous at $0$. $f'(x)=x^{2} \cos(1/x) (-x^{-2}) + 2x \sin (1/x) = 2x \sin(1/x) - \cos(1/x)$. In pieces: $\lim_{x \to 0} \cos (1/x)$ does not exist, so neither $f'(0^-)$ nor $f'(0^+)$ exists; $f'(x)$ oscillates infinitely often between values near $-1$ and $1$ with ever-increasing frequency as $x \rightarrow 0$, so for any $p>0$, on $[-p,0]$, $[-p,p]$ or $[0,p]$, $f'$ is not continuous. Question: How to show this more rigorously? AI:

- $x^2$ is continuous and differentiable over $\mathbb{R}$
- $\sin(x)$ is continuous and differentiable over $\mathbb{R}$
- $\frac 1 x$ is continuous and differentiable over all of $\mathbb{R}$ except $0$, and maps $\mathbb{R}\setminus\{0\}$ to $\mathbb{R}$
- $\displaystyle \sin\left(\frac 1 x\right)$ is therefore continuous and differentiable for all $\mathbb R$ except $0$, where it is undefined.
- $\displaystyle x^2\sin\left(\frac 1 x\right)$ is therefore continuous and differentiable for all $\mathbb R$ except possibly $0$.

To compute $f'(a)$ use the product rule followed by the chain rule to find: $$f'(a) = 2a\sin\left(\frac 1 a\right) - \cos\left(\frac 1 a\right)$$
H: Proof that an embedding into $\ell^1$ is compact Prove that any sequence $(x^{(n)})_{n\in\mathbb{N}}\subseteq\ell^1$ such that $\sum_{k=1}^\infty k\lvert x_k^{(n)}\lvert\leq1$ for all $n\in\mathbb{N}$ has a convergent subsequence. My thoughts on this: Clearly $\lvert x^{(n)}_k\lvert\leq\frac{1}{k}$ uniform in $n$. Therefore the sequence $x_1^{(n)}$ has a convergent subsequence $x_1^{(\tilde{n}_k)}$. Further extracting subsequences for any fixed $N\in\mathbb{N}$ I find $({n_l})_{l\in\mathbb{N}}\subseteq\mathbb{N}$ such that $x_k^{(n_l)}$ converges to some $x_k$ for all $1\leq k\leq N$ as $n\rightarrow\infty$. If for every $\epsilon>0$ I could find some $N\in\mathbb{N}$ such that $\sum_{k=N+1}^\infty \lvert x_k^{(n)}\lvert<\epsilon$ uniform in $n$ the proof would be complete. But I don't see why this should be true. Can you give me some hint? AI: Note: here is, I believe, the result you wanted to prove initially. The fact that the following operator is compact follows easily from the fact that it is the operator norm limit of finite rank operators. Consider the bounded linear operator $$ T:\ell^1\longrightarrow \ell^1\qquad (x_k)\longmapsto \left(\frac{x_k}{k}\right). $$ Since $T$ is the operator norm limit of the finite rank operators, $$T_n:(x_k)\longmapsto \left(x_1,\frac{x_2}{2},\ldots,\frac{x_n}{n},0,\ldots\right),$$ it is compact. Indeed, $\|T-T_n\|\leq \frac{1}{n+1}$. So if $B$ denotes the closed unit ball of $\ell^1$, then $T(B)$ is relatively compact in $\ell^1$. Now just observe that $$ T(B)=\{(y_k)\in\ell^1\;;\; \sum_{k=1}^{+\infty} k\,|y_k|\leq 1\} $$ is the set you are considering. It is easily seen to be closed in $\ell^1$. Indeed, if $y^{(n)}$ is in $T(B)$ and converges to $y$ in $\ell^1$, then it converges pointwise to $y$ a fortiori. So for every $K$ and every $n$, we have $$ \sum_{k=1}^K k\,|y^{(n)}_k|\leq 1\quad \Rightarrow \quad \sum_{k=1}^K k\,|y_k|\leq 1\quad \Rightarrow\quad \sum_{k=1}^{+\infty} k\,|y_k|\leq 1. $$ Conclusion: your set $T(B)$ is a compact subset of $\ell^1$. What you want is just the sequential compactness of the latter: every sequence in $T(B)$ has a convergent subsequence in $T(B)$ (and not only in $\ell^1$).
H: Similar cones - volumes and lateral areas Two similar cones have volumes 9$\pi$ and 72$\pi$. If the lateral area of the larger cone is 32$\pi$, what is the lateral area of the smaller cone? I did the following... $\frac {(9\pi)^3} {(32\pi)^2} = \frac {x}{(32\pi)^2}$ resulting in a lateral area of $4\pi$. Is this right? AI: Since the two cones are similar, the radius, height and slant height of the larger cone are each the same multiple of the radius, height and slant height of the smaller cone. Let's call that multiple $k$. This gives $r^{\prime} = kr$, $h^{\prime}=kh$ and $l^{\prime}=kl$ where the primed variables are the dimensions of the larger cone. We know that $\frac{1}{3}\pi r^2h = 9\pi\Rightarrow r^2h=27$. Likewise, $r'^2h'=216$. Substituting $r' = kr$ and $h' = kh$ gives $(kr)^2kh=216\Rightarrow k^3r^2h=216\Rightarrow k^3(27)=216\Rightarrow k^3=8\Rightarrow k=2$. Therefore, the larger cone is twice as big as the smaller cone in linear dimensions. We know that the lateral surface of the larger cone is $\pi r'l'=32\pi$ so $\pi (kr)(kl)=32\pi$ from which we get $k^2\pi rl=4\pi rl = 32\pi\Rightarrow \pi rl= 8\pi$. Therefore, the lateral surface area of the smaller cone is $8\pi$.
H: A special subset of uniformly distributed numbers is still uniformly distributed? Assume that I have a value range [1,1000]. I uniformly choose 10 numbers from [1,1000]. Assume that the chosen numbers are a1, a2, ..., a10. Besides, assume that they are ordered so that a1< a2< ...< a10. Here comes my question. If I always choose 10 numbers, order these 10 numbers, and always pick the first three numbers (e.g., a1, a2, and a3) from the ordered result, then can I still claim that these three numbers are uniformly chosen from [1,1000]? On the other hand, if I always choose 10 numbers, order these 10 numbers, and always pick those at even-positions (e.g., a2, a4, a6, a8, and a10) from the ordered result, then can I still claim that these five numbers are uniformly chosen from [1,1000]? AI: Unfortunately you can't. If you have $k$ uniformly distributed random variables in $[1,N]$ then the expected value of $a_1 + a_k$ will always be $N+1$. To see this, for every $i$ set $$b_i = N+1 - a_{k+1-i}.$$ So $b_1 = N+1 - a_k$, $b_2 = N +1 - a_{k-1}$, up to $b_k = N+1-a_1$. Now if I choose a uniformly distributed random variable $a\in[1,N]$ then $b=N+1-a$ is also uniformly distributed in $[1,N]$, so my $b_i$ are distributed exactly the same as my $a_i$. So $E(b_i) = E(a_i)$ for every $i$ and in particular $E(a_1 + a_k) = E(b_1 + a_k) = N+1$, since $b_1+a_k=N+1$ identically. In both your guesses the numbers you keep are only part of the ordered sample, so the expected value of the lowest kept number plus the highest kept number is no longer $1001$: it is strictly smaller than $1001$ when you keep $a_1,a_2,a_3$ (you discarded the largest values), and strictly larger when you keep the even positions (you discarded the smallest value $a_1$). Hence the remaining numbers cannot be independent uniforms. The only way of removing some of your $a_i$ such that the remaining values are iid uniform is to remove each of them independently with a fixed probability $p$.
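A quick simulation illustrating the argument (my own Python sketch, not from the original answer; I sample with replacement, which does not affect the point):

    import numpy as np

    rng = np.random.default_rng(0)
    draws = np.sort(rng.integers(1, 1001, size=(100000, 10)), axis=1)

    kept = draws[:, :3]                # keep a1, a2, a3
    print(kept.min(axis=1).mean() + kept.max(axis=1).mean())   # about 364, not 1001

    kept = draws[:, 1::2]              # keep a2, a4, a6, a8, a10
    print(kept.min(axis=1).mean() + kept.max(axis=1).mean())   # about 1092, not 1001

For genuinely iid uniforms on $[1,1000]$ the expected value of min plus max would be $1001$.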
H: How do I prove Poisson approaches Normal distribution I want to prove why the mean and variance of a $\operatorname{Poisson}(\lambda)$ behave the way they do when the time index approaches infinity (the distribution is approximated by a Normal). For example: $$ N_k = N_1 + (N_2 - N_1) + (N_3 - N_2) + ... + (N_k - N_{k-1}) $$ Using the CLT: $\frac{N_k - k\lambda}{\sqrt{k\lambda}}$ is normally distributed (in the limit). I want to answer why a Poisson R.V. is characterized by $\mathbb{E}[X] = \lambda$ and $\operatorname{Var}[X] = \lambda$, because when it approaches a normal distribution, $\mathbb{E}[Z] = \mu$ and $\operatorname{Var}[Z] = \sigma^2$. AI: The Poisson distribution does not approach the normal distribution; the standardized Poisson distribution does. More precisely, if $X_\lambda$ is Poisson with parameter $\lambda$, then $Y_\lambda$ converges in distribution to a standard normal random variable $Z$, where $Y_\lambda=(X_\lambda-\lambda)/\sqrt{\lambda}$. In particular, for every $\lambda$, $E[Y_\lambda]=E[Z]=0$ and $\mathrm{var}(Y_\lambda)=\mathrm{var}(Z)=1$ (in your language, $\mu=0$ and $\sigma^2=1$).
H: Set of left cosets is a group Let $N$ be a normal subgroup of $G$. Let $G/N = \{gN : g \in G \}$. Show that $(G/N, \circ, 1_{G/N})$ is a group, where $\circ : G/N \times G/N \rightarrow G/N$ is defined by $(gN \circ hN) = ghN$ and $1_{G/N} = 1N$. Thanks for your help. AI: First verify that the group operation is well-defined, that is, show that the correspondence $G/N\times G/N\to G/N$ is actually a function. Assume for $a,a',b,b'\in G$ we have $aN=a'N,bN=b'N$. We must verify that $aNbN=a'Nb'N$, or $abN=a'b'N$. This step is necessary because when we are dealing with cosets, we must ensure that our function definition depends only on the cosets, and not the coset representatives. For this part, note that if $aN=a'N$, $bN=b'N$, then $a=a'n_1$ and $b=b'n_2$ for some $n_1,n_2\in N$. Therefore $a'b'N=an_1bn_2N=an_1bN=an_1Nb=aNb=abN$, using associativity and the fact that $N$ is normal in $G$. From here it should be easy to verify that the three group properties hold: Identity: $N\quad ((gN)(N)=gN)$ Inverses: $(gN)^{-1}=g^{-1}N\quad (gNg^{-1}N=gg^{-1}N=N) $ Associativity: $gN(hNkN)=ghkN=(gNhN)kN$
H: Decreasing from the horizontal asymptote The function $f(x) = x^2/(x^2 - x -2)$ has the following graph. It has a horizontal asymptote $y=1$. For $x$ less than $-4$, the function is decreasing and its graph is under the asymptote. How is this possible when $\lim_{x \to -\infty} f(x) = 1$? Can a function decrease away from its horizontal asymptote? AI: The definition of decreasing is $$ x_1<x_2\implies f(x_1)\ge f(x_2). $$ As $x$ increases, $f(x)$ decreases. But as $x$ decreases to $-\infty$, $f(x)$ increases to $1$. Maybe the following example will make things clear. $x^2$ decreases on $(-\infty,0)$, but $\lim_{x\to-\infty}x^2=+\infty$.
H: How to prove the existence of infinitely many $n$ in $\mathbb{N}$, such that $(n^2+k)\mid n!$ Show there exist infinitely many $n \in \mathbb{N}$ such that $(n^2+k)\mid n!$, where $k\in \mathbb{N}$ is fixed. I have a similar problem: Show that there are infinitely many $n \in \mathbb{N}$ such that $$(n^2+1)\mid n!$$ Solution: We consider the Pell equation $n^2+1=5y^2$. This Pell equation has the solution $(n,y)=(2,1)$, so it has infinitely many solutions $(n,y)$, and $2y=2\sqrt{\dfrac{n^2+1}{5}}<n$. So for the larger solutions $5,y,2y$ are distinct elements of $\{1,2,3,\cdots,n\}$, hence $5\cdot y\cdot 2y=2(n^2+1)$ divides $n!$, and in particular $(n^2+1)\mid n!$. But for general $k$ I have tried to set up a Pell equation and failed. Thanks in advance! AI: Similar to your solution of $k=1$. Consider the Pell equation $n^2 + k = (k^2+k) y^2$. This has the solution $(n,y) = (k,1)$, hence has infinitely many solutions. Note that $k^2 + k = k(k+1) $ is never a square for $k\geq 2$, hence this is a Pell equation of the form $n^2 - (k^2+k) y^2 = -k$. Then, $2y = 2\sqrt{ \frac{ n^2+k} { k^2 +k } } \leq n$ (for $k \geq 2$, $n\geq 2$) always.
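For $k=1$ the first nontrivial Pell solution can be verified directly (my own Python sketch, added for illustration): the solution after $(2,1)$ is $(n,y)=(38,17)$, and indeed $38^2+1=5\cdot 17^2$ divides $38!$.

    from math import factorial

    n, y = 38, 17
    assert n**2 + 1 == 5 * y**2
    print(factorial(n) % (n**2 + 1))   # prints 0, so (n^2+1) | n!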
H: An integral problem? How do you integrate $e^{e^x}$? I was able to get it down to du/(ln u) but I wasn't able to go further. Thanks! AI: No, not Calculus AB level. This antiderivative is "not elementary" in the technical meaning of that term. https://en.wikipedia.org/wiki/Elementary_function added Maple says $$ \int \!{{\rm e}^{{{\rm e}^{x}}}}{dx}=-{\rm Ei}_1 \left(-{{\rm e}^{x}} \right) $$ where this "exponential integral" function is $$ \mathrm{Ei}_1(z) = \int_1^\infty\frac{e^{-tz}}{t}\;dt $$
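One can also check this numerically: the antiderivative can equivalently be written as $\operatorname{Ei}(e^x)$, the standard exponential integral, whose derivative should reproduce $e^{e^x}$. A sketch in Python using SciPy (my own addition, assuming scipy.special.expi is available):

    import numpy as np
    from scipy.special import expi

    x, h = 0.7, 1e-6
    # central finite difference of Ei(e^x) versus the integrand e^(e^x)
    numeric_derivative = (expi(np.exp(x + h)) - expi(np.exp(x - h))) / (2 * h)
    print(numeric_derivative, np.exp(np.exp(x)))   # the two values agree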
H: In how many ways can you choose $k$ out of $n$ people standing in line, So there's a space of at least 3 people between them In how many ways can you choose $k$ out of $n$ people standing in line, So there's a space of at least $3$ people between them. Actually, I don't even know how to start on this one. AI: HINT: You can think of this as a stars-and-bars problem. The $k$ people chosen are the bars, the $n-k$ other people are the stars, and you have to distribute the stars so that there are at least $3$ between each pair of adjacent bars. Your problem is to count the ways to do this. Equivalently, let $x_0$ be the number of people to the left of the first person chosen, $x_i$ the number between the $i$-th and $(i+1)$-st people chosen (reading from left to right) for $i=1,\dots,k-1$, and $x_k$ the number to the right of the $k$-th person chosen. Then you want the number of solutions to $$x_0+x_1+\ldots+x_k=n-k$$ in non-negative integers such that $x_i\ge 3$ for $i=1,\dots,k-1$. You’ve probably seen other stars-and-bars problems by now, but even if you haven’t, the linked article gives a reasonably good explanation.
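Carrying the stars-and-bars count to its closed form and brute-force checking it (my own Python sketch; the substitution $y_i=x_i-3$ for $1\le i\le k-1$ gives $\binom{n-3k+3}{k}$ solutions, and "at least 3 people between them" means chosen positions differ by at least 4):

    from itertools import combinations
    from math import comb

    def brute(n, k):
        # directly count k-subsets of n positions with pairwise gaps >= 4
        return sum(1 for pos in combinations(range(n), k)
                   if all(b - a >= 4 for a, b in zip(pos, pos[1:])))

    for n in range(1, 15):
        for k in range(1, 4):
            m = n - 3*k + 3
            expected = comb(m, k) if m >= 0 else 0
            assert brute(n, k) == expected
    print("formula matches enumeration")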
H: Prove that for all positive integers $n$, the inequality $\binom{2n}{n} <4^n$ holds How do I prove that for all positive integers $n$, the inequality $\binom{2n}{n}<4^n$ holds? Thank you! AI: Hint: The LHS is the number of $n$-element subsets of $[2n]=\{1,\dots,2n\}$, while the RHS $4^n=2^{2n}$ is the number of all subsets of $[2n]$.
H: Defining a subset The question I have to answer is as following in Swedish: Hur många mängder X uppfyller {a, b, c} ⊆ X ⊆ {a, b, c, d, e}? Loosely translated (I do not know mathematical terminology well in English): How many sets X satisfy {a, b, c} ⊆ X ⊆ {a, b, c, d, e}? I do not know how to solve this question. For instance, would {{a,b}, a} be two valid subsets of {a, b, c, d, e}? Any insight into the main question is greatly appreciated. AI: It is fairly simple to list them. Any such $X$ must contain $a$, $b$ and $c$, and may additionally contain any subset of $\{d, e\}$. (As for your example: $\{\{a,b\}, a\}$ is not a subset of $\{a, b, c, d, e\}$, because its element $\{a,b\}$ is a set, and $\{a,b\}$ is not an element of $\{a, b, c, d, e\}$.) The possibilities are: {a, b, c, d, e} {a, b, c, d} {a, b, c, e} {a, b, c} So the answer is 4; in general there are $2^m$ such sets, where $m$ is the number of optional elements (here $m=2$).
H: Help in a proof of a result in Hungerford's book I need help to prove the last part of this corollary: I didn't understand the part (IV) because the author proves just the canonical projection case and the statement says "every nonzero homomorphism of rings $R\to S$ is a monomorphism". Thanks in advance. AI: If every non-zero homomorphism of rings $R\to S$ is a monomorphism, then certainly the canonical projections are. On the other hand, if $h:R\to S$ is a surjective homomorphism, then $S\cong R/\ker h$, and you're essentially looking at the canonical projection $\pi:R\to R/\ker h$ anyway.
H: Do there exist $29$ consecutive integers so that every of them has exactly $2$ distinct prime factors? Do there exist $29$ consecutive integers, denote $a,a+1,\cdots,a+28$, so that every of them has exactly $2$ distinct prime factors? For example, $25$ has only one distinct prime factor, and $30$ has $3$ distinct prime factors. These are my effors: 1.Since they are not divisible by $30$, so $a\equiv 1 \pmod {30}.$ 2.I wrote a code (Mathematica 9.0) for this problem: j = 0; i = 2; While[j < 29 && i < 10^8, If[Length[FactorInteger[i]] == 2, j = j + 1; i = i + 1, j = 0; i = i + 31 - Mod[i, 30]]]; Print[{j, i}] After runing this program, i find that there are no such numbers when $a<10^8$, it takes about $4$ minutes. Thanks in advance! AI: If the first number is $30n+1$ we note that: $$30n+6=6(5n+1)$$$$30n+12=6(5n+2)$$$$30n+18=6(5n+3)$$$$30n+24=6(5n+4)$$ So $5n+r$ is divisible only by $2$ or $3$ for $1 \le r\le4$. $5n+1$ and $5n+2$ are coprime, so one must be a power of $3$ and the other a power of $2$ (neither can be $1$). It is then easy to see that we can't also fit $5n+3$ and $5n+4$ into the same pattern.
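For readers without Mathematica, here is an equivalent search in Python (my own sketch; it assumes sympy's primefactors, and per the answer above such a run can never reach length 29, so it only reports the longest run found in the range):

    from sympy import primefactors

    best = run = 0
    for i in range(2, 100000):
        if len(primefactors(i)) == 2:
            run += 1
            best = max(best, run)
        else:
            run = 0
    print(best)   # longest run of consecutive integers with exactly 2 distinct prime factors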
H: Find $\frac{a^2}{2a^2+bc}+\frac{b^2}{2b^2+ac}+\frac{c^2}{2c^2+ab}$ if $a+b+c=0$ I'm stuck at this algebra problem; it seems to me that what's provided doesn't even help at all. Provided: $$a+b+c=0$$ Find the value of: $$\frac{a^2}{2a^2+bc}+\frac{b^2}{2b^2+ac}+\frac{c^2}{2c^2+ab}$$ I'm not sure where to start, and there's no way I can think of to factor this big expression into a form like $a\times p+b\times q+c\times r=s$ where $p,q,r,s\in\mathbb{Z}$. Thanks! AI: Note that, using $-b-c=a$: $$\frac{a^2}{2a^2+bc}=\frac{a^2}{a^2+a(-b-c)+bc}=\frac{a^2}{(a-b)(a-c)}$$ This applies in the same way for: $$\frac{b^2}{2b^2+ac}=\frac{b^2}{(b-c)(b-a)}\ \text{and}\ \frac{c^2}{2c^2+ab}=\frac{c^2}{(c-a)(c-b)}$$ Therefore, the original expression is equal to: \begin{align} &\frac{a^2}{(a-b)(a-c)}+\frac{b^2}{(b-c)(b-a)}+\frac{c^2}{(c-a)(c-b)} \\ \\ &=\frac{-a^2(b-c)-b^2(c-a)-c^2(a-b)}{(a-b)(b-c)(c-a)} \\ \\ &=\frac{-a^2b-b^2c-c^2a+a^2c+b^2a+c^2b}{(a-b)(b-c)(c-a)} \\ \\ &=\frac{(a-b)(b-c)(c-a)}{(a-b)(b-c)(c-a)} \\ \\ &=\boxed{1} \end{align}
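A quick symbolic confirmation (my own sketch, assuming sympy is available): substitute $c=-a-b$ and simplify.

    from sympy import symbols, simplify

    a, b = symbols('a b')
    c = -a - b                         # enforce a + b + c = 0
    expr = a**2/(2*a**2 + b*c) + b**2/(2*b**2 + a*c) + c**2/(2*c**2 + a*b)
    print(simplify(expr))              # prints 1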
H: looking for reference for integral inequality Math people: I would like a reference for the following fact (?), which I proved myself (I am 99% sure the proof is valid) but which has probably been done before. My proof was a little messy. If no one can supply a reference, I would appreciate an elegant proof. Here is what I proved: Let $\Omega$ be an open subset of $\mathbf{R}^n$, $\delta >0$, and $K >0$. Then there exists $\kappa = \kappa(\delta, K, \Omega)>0$ with the following property: if $f , g \in C^\infty(\Omega)$ with $\inf_\Omega f \geq 0$, $\inf_\Omega g \geq 0$, $\int_\Omega f\,dx \leq K$, and $\int_\Omega g\,dx \geq \delta$, then $$ \int_\Omega \sqrt{f^2+g^2} - f \,dx > \kappa.$$ Please note that there are no assumptions on $\sup_\Omega f$: if there were, the proof would be easier. My proof did not use the smoothness of $f$ or $g$, and my $\kappa$ did not involve $\Omega$ at all. I would be OK with a $\kappa$ that did involve $\Omega$. I do not care how sharp the inequality is. I only need the result for $\Omega$ being a two-dimensional rectangle. I need this as part of a larger proof and it would be better to cite this fact, if it has been done before, than to include it in the paper. AI: One way of proving it is to write the integrand as $$\sqrt{f^2+g^2}-f=\frac{g^2}{\sqrt{f^2+g^2}+f}$$ and use the Cauchy-Schwarz inequality to end up with $$\int\frac{g^2}{\sqrt{f^2+g^2}+f}\cdot\int\sqrt{f^2+g^2}+f\ge\left(\int g\right)^2.$$ Now $\int\sqrt{f^2+g^2}+f\le \int f+g+f\le 2K+\int g.$ So our integral is bounded below by $$\frac{\left(\int g\right)^2}{2K+\int g},$$ and since $t\mapsto \frac{t^2}{2K+t}$ is increasing for $t\ge 0$ and $\int g\ge\delta$, any $\kappa<\frac{\delta^2}{2K+\delta}$ works.
H: Calculating permutations if the sequences have to be in ascending order? How would you go about calculating the number of permutations in ascending order. Obviously if you had (a set of) 3 numbers you have $ 3! $ permutations: (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), and (3,2,1) But only one of these is in ascending order (1,2,3). Consider the lottery - picking 6 numbers from 49. Normally the order for this doesn't matter you just calculate '49 choose 6' but if the numbers have to be in ascending order how would you calculate that? AI: In the same way: The number of (strictly) ascending sequences of $6$ numbers chosen from $49$ is $\binom{49}{6}$. For as you have pointed out, once the $6$ numbers have been chosen, there is precisely one way to arrange them in ascending order.
H: For what values of $a$ the function $y=x^6+ax^3-2x^3-2x^2+1$ is even I want to know for what values of $a$ this function is even. I know that $f(x)=f(-x)$ characterizes an even function; how does that help me here? $$y=x^6+ax^3-2x^3-2x^2+1$$ Thanks! AI: $$f(x)=f(-x)\\ x^6+ax^3-2x^3-2x^2+1=x^6-ax^3+2x^3-2x^2+1\\ (a-2)x^3=(2-a)x^3 $$ So for what values of $a$ is this equality satisfied for all $x$? $$a-2=2-a\iff 2a=4\iff a=2$$
H: Partial fraction decomposition with a nonrepeated irreducible quadratic factor I'm trying to do a partial fraction decomposition on the following rational expression with a nonrepeated irreducible quadratic factor: $$\dfrac{-28x^2-92}{(x-4)^2(x^2+1)}$$ I've broken it down into an identity: $-28x^2 -92 = A((x-4)(x^2-1))+B(x^2+1)+(Cx+D)(x-4)^2$ and solved for B by setting x to 4 (getting $B=-12$), distributed the -12 and moved the resulting quadratic to the left side, leaving me with: $$12x^2-28x-80 = A((x-4)(x^2-1)) + (Cx+D)(x-4)^2$$ Now I want to solve for A by setting x to something that will make the coefficient on $Cx+D$ zero, but I'm stumped - setting it to 4 also gets rid of my A. What do I do? Or have I messed up somewhere along the way? AI: To find equations for $A$, $C$ and $D$: compare the coefficient of $x^3$, put $x=0$, and compare the coefficient of $x^2$. After doing all these operations you will find the equations and some values. If you get stuck, leave a comment and I'll do it. You have this equation $$-28x^2-92=A(x-4)(x^2+1)+B(x^2+1)+(Cx+D)(x-4)^2$$ (note also that the first factor should be $(x-4)(x^2+1)$, not $(x-4)(x^2-1)$). Then putting $x=4$ you get $B=\dfrac{-540}{17}$. Comparing the coefficient of $x^3$ you get: $0=A+C$. Comparing the coefficient of $x^2$ you get: $-28=-4A+B-8C+D$. Putting $x=0$ you get: $-92=-4A+B+16D$. You have the value of $B$, so these equations become a small linear system, and I hope you can take it to the end from here.
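You can check the final decomposition with a computer algebra system (my own sketch, assuming sympy is available):

    from sympy import symbols, apart

    x = symbols('x')
    # apart performs the partial fraction decomposition over the rationals
    print(apart((-28*x**2 - 92) / ((x - 4)**2 * (x**2 + 1)), x))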
H: Differential equations basic problem I know this is a basic Physics problems but somehow I can't solve it. We have the differential equation: $2x''x^2 - 4 x^2x' - 2 x^3 = 0$ We have to conclude that the system: $x' = y $ $y' = 2y + x$ ..is equivalent to the differential equation. How can I do it? Thanks in advance! AI: If $x \neq 0$, then solving for $x^{\prime\prime}$ we get $$ x^{\prime\prime} = 2x^{\prime} + x $$ Now let $y = x^{\prime}$. Then $$ y^{\prime} = x^{\prime\prime} = 2x^{\prime} + x = 2y + x $$
H: Calculate the angle between two curves $f(x)=x^2$ and $g(x)=\frac{1}{\sqrt{x}}$ I want to calculate the angle between the two curves $f(x)=x^2$ and $g(x)=\frac{1}{\sqrt{x}}$ at their intersection. What I did so far is: $$x^2=\frac{1}{\sqrt{x}} \rightarrow x=1$$ then: $$\tan(a)=\left |\frac{f'(a)-g'(a)}{1+f'(a)*g'(a)}\right| $$ After setting $x=1$ I get zero in the denominator. Any suggestions? Thanks! AI: $$(1)\;\;x^2=\frac1{\sqrt x}\implies x=1$$ $$(2)\;f'(1)=2\;,\;f(1)=1\implies\;\text {the tangent line to this function at the intersection point is}$$ $$y-1=2(x-1)\implies y=2x-1$$ $$(3)\;g'(1)=-\frac12\,,\,g(1)=1\implies\;\text {the tangent line to this function at the intersection point is} $$ $$y-1=-\frac12(x-1)\implies y=-\frac12x+\frac32$$ Since the product of the tangent lines' slopes is $\,-1\,$, these lines, and thus the curves, are perpendicular: the angle at the intersection point is $\frac{\pi}{2}$. (This is exactly the case your $\tan$ formula signals with a zero denominator.)
H: Convergence of random variable to a negative constant Let $X_n$ be a sequence of R.V.s and $X_n\overset{P}{\rightarrow}A$ (or $X_n\rightarrow A$ almost surely) where $A<0$. I want to prove that $Pr[X_n < 0] \rightarrow 1$ (or, in the almost-sure case, that eventually $X_n < 0$ almost surely). The almost-sure case is obvious from the definition of the convergence, but in the case of convergence in probability, how do I see that it holds from the following definition? $$ \left(\forall \varepsilon > 0\right) \lim_{n\rightarrow\infty} Pr\left[ |X_n - A| \leq \varepsilon \right] = 1$$ AI: From your definition of convergence in probability, we must have $\lim_{n\rightarrow\infty}\Pr[|X_n - A| \le \frac{|A|}{3}] = 1$. But since $A < 0$, we know that $|X_n - A| \le \frac{|A|}{3}$ implies that $X_n < 0$. So, we have: $\{|X_n-A| \le \frac{|A|}{3}\} \subset \{ X_n < 0 \}$. Thus, $\Pr[ X_n < 0] \ge \Pr[|X_n-A| \le \frac{|A|}{3}]$ for all $n$. Since the latter probability converges to $1$, we must have: $\lim_{n\rightarrow\infty}\Pr[X_n < 0] = 1$.
H: Construct a linear programming problem for which both the primal and the dual problem has no feasible solution Construct (that is, find its coefficients) a linear programming problem with at most two variables and two restrictions, for which both the primal and the dual problem has no feasible solution. For a linear programming problem to have no feasible solution it needs to be either unbounded or just not have a feasible region at all I think. Therefore, I know how I should construct a problem if it would only have to hold for the primal problem. However, could anyone tell me how I should find one for which both the primal and dual problem have no feasible solution? Thank you in advance. AI: Let $A=\left(\begin{smallmatrix} -1&0\\0&1\end{smallmatrix}\right)$, $b=\left(\begin{smallmatrix}1\\1\end{smallmatrix}\right)=-c$. $Ax\ge b$ and $A^Ty\le c$ cannot both be satisfied with positive $x,y$.
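One can confirm the mutual infeasibility numerically (my own sketch, assuming scipy is available; status code 2 means "infeasible"):

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[-1.0, 0.0], [0.0, 1.0]])
    b = np.array([1.0, 1.0])
    c = -b

    # primal: find x >= 0 with A x >= b   (written as -A x <= -b)
    primal = linprog(c=np.zeros(2), A_ub=-A, b_ub=-b, bounds=(0, None))
    # dual: find y >= 0 with A^T y <= c
    dual = linprog(c=np.zeros(2), A_ub=A.T, b_ub=c, bounds=(0, None))
    print(primal.status, dual.status)   # 2 2, i.e. both infeasible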
H: Prime decomposition in ring extensions Let $R\subseteq R'$ be Dedekind domains, let $\mathfrak{p}$ be a nonzero prime ideal of $R$. Then $\mathfrak{p}R'$ is an ideal of $R'$ and it has a factorization $$\mathfrak{p}R'=\mathfrak{P}_1^{e_1}\cdots\mathfrak{P}_g^{e_g}$$ in which $\mathfrak{P}_1,\ldots\mathfrak{P}_g$ are distinct prime ideals of $R'$ and $e_1,\ldots,e_g$ are positive integers. At this point, my textbook says something obscure (in my opinion). It says: note thet $\mathfrak{P_i}\cap R=\mathfrak{p}$ for each $i$. $\textbf{Thus}$ the integer $e_i$ is completely determined by $\mathfrak{P}_i$. I can't understand the sentence in itself: what does it mean that the exponent is completely determined by the ideal?? Secondly, how the previous argument should show this? Any suggestion/comment/help would be appreciated AI: For an ideal $\mathfrak{P}$ appearing in the decomposition of $\mathfrak{p}$, one often writes the suggestive notation $\mathfrak{P} | \mathfrak{p}$. Note that this means $\mathfrak{p}$ is contained in the ideal $\mathfrak{P}$. Then the power $e$ appearing in the decomposition is the highest power of $\mathfrak{P}$ containing $\mathfrak{p}R'$. In other words, $\mathfrak{p}R' \subseteq \mathfrak{P}^e$ but $\mathfrak{p}R' \not\subseteq \mathfrak{P}^{e+1}$. This is the sense in which the $e$ is determined by this decomposition.
H: Does that series converge or diverge? Does the series $$\sum \limits _{n=3}^\infty \frac{(-1)^{[\log n]}}{\sqrt{n}}$$ converge or diverge? As usually, $[x]$ denotes the integer part of $x.$ AI: Calculate the sum, $n=\lceil e^k\rceil$ to $\lfloor e^{k+1}\rfloor$. There are about $(e-1)e^k$ terms. Each has the same sign, and each has absolute value $\ge \frac{1}{\sqrt{e^{k+1}}}$. So the block of terms has absolute value at least about $(e-1)e^k\cdot e^{-(k+1)/2}=(e-1)e^{(k-1)/2}$, which tends to $\infty$ with $k$. So the Cauchy Criterion for the partial sums fails, and the series does not converge.
H: Dual of holomorphic functions (with the $L^1$ topology) Let $\Omega$ be a connected domain of the complex plane, and let $E$ be the vector space of integrable holomorphic functions on $\Omega$. Then it can be checked that $E$ is a closed subspace of $L^1(\Omega)$. My question (which I'm sure has a well-known answer, but I couldn't find it) is : what is the dual of $E$ ? More precisely, does it identify with a nice (i.e. explicit) subspace of $L^\infty(\Omega)$ ? Note that since we do not endow $E$ with the usual topology of holomorphic functions (uniform convergence on compacts) this has a priori nothing to do with hyperfunctions. AI: The key term is Bergman space. The space you introduced is usually denoted $A^1(\Omega)$ or $L^1_a(\Omega)$. When $\Omega$ is a disk, the dual of $A^1(\Omega)$ can be identified with the space of Bloch functions on the disk. This is proved in the book Bergman spaces by Duren and Schuster. (By the way, in the reflexive range $1<p<\infty$ we have $(A^p)^*=A^q$ on the disk, with $p^{-1}+q^{-1}=1$.) If $\Omega$ is simply-connected and has smooth boundary ($C^{1,\alpha}$ with $\alpha>0$), then the conformal map of $\Omega$ onto the disk has derivative bounded from $0$ and $\infty$. By the change of variable formula, the map induces isomorphism between Bergman spaces on $\Omega$ and on the disk. Thus, the problem reduces to the case of the disk; the dual is again the Bloch space. For a general simply-connected domain $\Omega$, the identification of the dual of $A^p(\Omega)$ is difficult; it is related to a major open problem in complex function theory (Brennan's conjecture). As a starting point, see the paper The dual of a Bergman space on simply connected domains by Hedenmalm.
H: Math school online? As a hobbyist programmer with little to no math skills (no science skills either, thank you religious schools) I am finding math encroaching on my life more and more often. I have always laughed and said that I'm just not good at math because my brain doesn't work like that. My younger brother challenged me recently claiming that it wasn't a lack of ability that kept higher math out of my hands but a lack of interest. Because my current realm of studies is really starting to delve into 3D modeling, programming for web, mobile and desktop, I feel that a stronger math skill set would benefit me immensely. But I don't know where to start... I'm wondering if there is a math version similar to: http://www.code.org/khan-academy Some place where I can go and start challenging my brain, increasing my math as often as I have time? Some place online? I would also appreciate any tips for books or other resources that would be recommended for someone wanting to ignite an interest in math. To give a background, my highest level of math in school was pre-algebra in 7th grade. After that the schools my parents sent me to didn't teach higher math (nor evolution) so when I hit college, I had to get a tutor to pass (with a C) the College Level Algebra... I barely did it and it was so long ago I don't remember anything I learned. I hope this is the right place to ask this... I stumbled across math.stackexchange by accident. Whether this question gets answered or not I will start browsing around and seeing what all there is... AI: The obvious answer is the math section of Khan Academy! More advanced courses can be found here and here, a couple of nice ones on analysis and functional analysis by Joel Feinstein here and some brilliant ones on linear algebra/systems/optimisation by Stephen Boyd here. It is also worthwhile to check for courses here and (in the future) here. See also the answers to this question and generally those to questions with the tag (self-learning).
H: Need help in finding counterexample I need to find example that this isn't correct: Let $R_1,R_2,R_3$ be binary relations on set $A$. Prove that this is not correct: $(R_1\cup R_2)\circ R_3 \supseteq(R_1\circ R_3)\cup(R_2\circ R_3)$ AI: According to one convention for writing the composition of two relations, $\langle a,b\rangle\in(R_1\cup R_2)\circ R_3$ if and only if $$\exists x\in A\Big(\langle a,x\rangle\in R_3\land\langle x,b\rangle\in R_1\cup R_2\Big)\;.$$ But this is true if and only if $$\exists x\in A\left(\langle a,x\rangle\in R_3\land\Big(\langle x,b\rangle\in R_1\lor\langle x,b\rangle\in R_2\Big)\right)\;,$$ which in turn is equivalent to $$\exists x\in A\Big(\langle a,x\rangle\in R_3\land\langle x,b\rangle\in R_1\Big)\lor\exists x\in A\Big(\langle a,x\rangle\in R_3\land\langle x,b\rangle\in R_2\Big)$$ and hence to $\langle a,b\rangle\in(R_1\circ R_3)\cup(R_2\circ R_3)$. According to the other convention, $\langle a,b\rangle\in(R_1\cup R_2)\circ R_3$ if and only if $$\exists x\in A\Big(\langle a,x\rangle\in R_1\cup R_2\land\langle x,b\rangle\in R_3\Big)\;,$$ i.e., if and only if $$\exists x\in A\left(\Big(\langle a,x\rangle\in R_1\lor\langle a,x\rangle\in R_2\Big)\land\langle x,b\rangle\in R_3\right)\;,$$ which in turn is equivalent to $$\exists x\in A\Big(\langle a,x\rangle\in R_1\land\langle x,b\rangle\in R_3\Big)\lor\exists x\in A\Big(\langle a,x\rangle\in R_2\land\langle x,b\rangle\in R_3\Big)\;,$$ i.e., to $\langle a,b\rangle\in(R_1\circ R_3)\cup(R_2\circ R_3)$. By either convention, therefore, $$(R_1\cup R_2)\circ R_3=(R_1\circ R_3)\cup(R_2\circ R_3)\;,$$ and no such counterexample exists.
H: Age is fraction of year man dies My friend sent me a question from an olympiad and I'm not sure that we have followed the right method, we both did the same thing: The age of a man was 2/61 of the year in which he died. How old would he have been if he lived until 1992? Surely then he dies in 1992 and then his age is 2/61 times 1992 rounded to the nearest integer? I am unsure though, this seems too simple. AI: If you let $a$ be the age of the man in the year in which he died and $d$ be the year of his death, then $a=\frac {2d}{61}$. Since $61$ is prime and $a$ is an integer we have to have $d=61k$, in which case $a=2k$. So if, for example, $k=32$ then $d=1952$ and $a=64$, which would give an age of $104$ in $1992$ (he was born in $1888$). If $k=31$ then $d=1891$ and $a=62$, implying age $163$ in $1992$. To exclude this possibility (and others) you have to use a nonmathematical assumption, for example that human beings do not live beyond age $120$.
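A brute-force search (my own small Python sketch) lists every death year consistent with the condition in the relevant range; only the $1952$ case gives a plausible human lifespan:

    for d in range(1700, 1993):
        if (2 * d) % 61 == 0:          # age 2d/61 must be an integer
            a = 2 * d // 61
            born = d - a
            print(d, a, 1992 - born)   # death year, age at death, age in 1992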
H: notation question (bilinear form) So I have to proof the following: for a given Isomorphism $\phi : V\rightarrow V^*$ where $V^*$ is the dual space of $V$ show that $s_{\phi}(v,w)=\phi(v)(w)$ defines a non degenerate bilinear form. My question : Does $\phi(v)(w)$ denote the map from $v$ to a linear function $w$? (in this case i had serious trubles in showing linearity in the second argument,really confusing. Or maybe it just means $\phi(v)$ times $w$ where $w$ is the scalar value ( we get $w$ by applying $v$ in the linear function it is mapped to) I just started today with dual spaces and try my best with the notation , but i couldn't figure it out , please if you have any idea please help me with the notation , i will solve the problem on my own. AI: $\phi:V\to V^\ast$ $\phi(v)\in V^\ast$ means that $\phi(v):V\to F$ $\phi(v)(w)$ is just the value of $\phi(v)$ at $w$.
H: Contour Integral: $\int^{\infty}_{0}(1+z^n)^{-1}dz$ I'm working through Priestley's Complex Analysis (really good book by the way) and this Ex 20.2: Evaluate $\int^{\infty}_{0}(1+z^n)^{-1}dz$ round a suitable sector of angle $\frac{2\pi}{n}$ for $n=1,2,3,...$ Can someone advise what the contour may be? If we use a sector that includes zero, surely we'll have to indent it to avoid the singularity (since it isn't a simple pole on account of the negative power). Thanks. AI: The contour $C$ is a wedge-shaped contour of angle $2 \pi/n$, as you state, with respect to the positive real axis. This contour may be broken into 3 pieces: $$\oint_C \frac{dz}{1+z^n} = \int_0^R \frac{dx}{1+x^n} + i R \int_0^{2 \pi/n} d\phi \, e^{i \phi} \frac{1}{1+R^n e^{i n \phi}} - e^{i 2 \pi/n} \int_0^R \frac{dx}{1+x^n} $$ The second integral vanishes in the limit as $R \to \infty$; in fact, it vanishes as $1/R^{n-1}$ in this limit. The rest is equal to $i 2 \pi$ times the residue at the pole at $z=e^{i \pi/n}$; note that this pole is interior to $C$ and therefore no further deformation of $C$ is necessary. The residue theorem then implies $$\left ( 1-e^{i 2 \pi/n}\right) \int_0^{\infty} \frac{dx}{1+x^n} = \frac{i 2 \pi}{n e^{i \pi (n-1)/n}}$$ The final result is $$\int_0^{\infty} \frac{dx}{1+x^n} = \frac{\pi}{n \sin{(\pi/n)}}$$
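A numerical confirmation of the closed form (my own sketch, assuming scipy is available):

    import numpy as np
    from scipy.integrate import quad

    for n in range(2, 7):
        numeric, _ = quad(lambda x, n=n: 1.0 / (1.0 + x**n), 0, np.inf)
        print(n, numeric, np.pi / (n * np.sin(np.pi / n)))   # the two columns agree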
H: Cramer-Rao Lower Bound Assume that $X_1,X_2,\ldots,X_N\sim N(\mu,2^2)$ and $Y_1,Y_2,\ldots,Y_M\sim N(0,\sigma^2)$. a)Find the Cramer-Rao Lower Bound (CRLB) for the variance of the unbiased estimators of $\mu$. b)Find the CRLB for the variances of the unbiased estimators of $\mu^2$. c)Is the MLE, $\hat{\mu}$, a uniformly minimum variance unbiased estimator (UMVUE) of $\mu$? so for part a) I got $\dfrac{4}{n}$ and for part b) I got $\dfrac{\sigma^2}{n}$. Are these answers correct? Just want to know if I'm on the right track. Lastly for part C, can anyone give me some guidance on where to start? Kind of lost haha AI: So what have the $Y_i$ variables got to do with it? Anyway, your answer to part (a) looks fine. Your answer to part (b) does not. You should be able to derive part (b) from part (a). In particular, if the CRLB on $\theta$ is say $CRLB(\theta)$, and if $g(\theta)$ is some differentiable function of $\theta$ that we are interested in estimating, then, subject to some regularity conditions, the CRLB on $g(\theta)$ is: $$\left(\frac{\partial g(\theta) }{\partial \theta }\right)^2 CRLB(\theta) $$ In this case, $g(\theta)$ = $\mu^2$, and $\theta$ is $\mu$, so the rest is left to you. And since the OP has now solved it, one can also check one's work on such things with a computer algebra system. For this problem, part (b) would simply be: $$\text{CRLB}=\frac{1}{n *\text{FisherInformation}\left[\mu ^2,f\right]}$$ where FisherInformation is a mathStatica function, and $f$ is the $N(\mu,\sigma^2)$ pdf, and which returns, as output: $\frac{4 \mu ^2 \sigma ^2}{n}$
H: special case of Nagata's Lemma (Matsumura p.86) Let $K$ be a field and $R$ a valuation ring of $K$ with maximal ideal $m_R$. Let $a \in R$ such that $1-a \in m_R$. Statement: For any $s$ that is not a multiple of the characteristic of $R/m_R$, the element $(1+a+a^2+\cdots+a^{s-1})^{-1}$ is inside $R$. How do we prove the above statement? AI: Reducing mod $m_R$, the element $1 + a + \cdots + a^{s-1}$ becomes $s$ (since $a \equiv 1 \pmod{m_R}$), which is invertible by assumption. Since $R$ is a local ring, any element of $R$ which is invertible mod $m_R$ is invertible in $R$; thus $1 + a + \cdots + a^{s-1}$ is invertible in $R$.
H: inner product space and fixed vector My professor gave this question: $V$ is an inner product space, $v$ and $w$ are fixed vectors in $V$. Let $T:V\to V$ be a linear map such that $T(u)=(u,v)w$. Find $T^*$. Now I need to find the rank of $T$ and to find the matrix that represents $T$ by the standard basis of $\mathbb{C}^n$? So I found $T^*$ and according to my calculation it is $T^*(x)=\overline{(w,x)}v$. Now assuming I am not wrong with $T^*$, how can I find rank$(T)$? I know that $$\text{rank}(T)=\text{rank}(TT^*)=\text{rank}(T*T)$$ but does this help me? I do not know what $v$ and $w$ are and without that how can I calculate what $T$ is doing the vectors of the basis in order to see how the matrix would look? AI: HINT: $(u, v)$ is always a scalar. You should be able to conclude on $Rank(T)$ from that, assuming that neither $v$ nor $w$ is the zero vector.
H: Linear Algebra - Vector Spaces of Vector Spaces If we have a Vector Space such as $\Bbb R$, we can make Vector Spaces out of it. For example, let $v_1,v_2,v_3\in \Bbb R$. We know that $(v_1,v_2,v_3)$ is a Vector Space - it is $\Bbb R_3$ - and its dimension is $3\times \dim {(\Bbb R)} = 3$. Thus we can see that $\Bbb R_3$ is composed of three elements of $\Bbb R$. Any "Container Space" ($V_1$) is made of a Vector Space of elements whose elements are vectors in another Vector Space ($V_2$). Also, $(v_1,v_2,...,v_n) \oplus_1 (v'_1,v'_2,...,v'_n) = (v_1\oplus_2v'_1,v_2\oplus_2v'_2,...,v_n\oplus_2v'_n)$ and $c\odot_1(v_1,v_2,...) = (c\odot_2 v_1, c\odot_2 v_2...)$. In other words, Operators are passed to the component vectors. In general, if we have a Vector Space $V$ and a Set $C$ made from vectors in $V$, is $C$ always a Vector Space? If so, will the dimension of $C$ always be $\dim{(P)}\times\dim{(V)}$ where $P$ is the space that forms elements in the way that the Set $C$ does? I have found a few more Vector Spaces of the same type; if you want more examples, I can post them. AI: Wait a moment. I think there are some things confused here. Given $(v_1,v_2,v_3)$ for each $v_i \in \mathbb{R}$ this tuple isn't a vector space, rather it's an element of another vector space, namely $\mathbb{R}^3$. So, how do we construct new vector spaces from old ones? Well, there are two methods that I think are very interesting, and that probably will interest you. The first method, which is the simpler (and which is what you are looking for, judging from your question), is the so-called Direct Sum. Given a family of vector spaces $\{V_i : i \in I\}$ we define the direct sum as the result of taking the cartesian product of the $V_i$ and adding the operations componentwise as defined in each $V_i$. In other words, elements of the direct sum are of the form $(v_1, \dots, v_n)$ with each $v_i \in V_i$ and we define the operations componentwise in the sense that we have: $$(v_1,\dots, v_n)+(w_1,\dots,w_n)=(v_1+w_1,\dots,v_n+w_n)$$ $$a(v_1,\dots, v_n)=(av_1,\dots,av_n)$$ Note that the operation in the $i$-th component is the operation defined in the space $V_i$. This vector space is denoted by the symbol $V_1 \oplus \cdots \oplus V_n$ or simpler with the notation: $$\bigoplus_{i\in I}V_i$$ It's fairly easy to prove that $\dim(\oplus_{i\in I}V_i) = \sum_{i \in I}\dim V_i$ (and I invite you to do that). This is, in fact, the way $\mathbb{R}^3$ is constructed. You can see that with this definition we have: $$\mathbb{R}^3 = \bigoplus_{i=1}^3\mathbb{R}$$ The other method is called the tensor product. Its construction is somewhat sophisticated (and in truth its motivation is only clear when we come to study multilinear constructions), but in the end it's simple when you understand it: it's meant to mimic usual products. This space is denoted by $V_1 \otimes \cdots \otimes V_n$ or simpler: $$\bigotimes_{i \in I}V_i$$ And it satisfies $\dim(\otimes_{i\in I}V_i) = \prod_{i\in I}\dim V_i$, so in some intuitive sense the tensor product is bigger than the direct sum. To read more about the construction of the tensor product consult Kostrikin's Linear Algebra and Geometry chapter 4 about Multilinear Algebra.
H: Integrate by parts: $\int \ln (2x + 1) \, dx$ $$\eqalign{ & \int \ln (2x + 1) \, dx \cr & u = \ln (2x + 1) \cr & v = x \cr & {du \over dx} = {2 \over 2x + 1} \cr & {dv \over dx} = 1 \cr & \int \ln (2x + 1) \, dx = x\ln (2x + 1) - \int {2x \over 2x + 1} \cr & = x\ln (2x + 1) - \int 1 - {1 \over {2x + 1}} \cr & = x\ln (2x + 1) - (x - {1 \over 2}\ln |2x + 1|) \cr & = x\ln (2x + 1) + \ln |(2x + 1)^{1 \over 2}| - x + C \cr & = x\ln (2x + 1)^{3 \over 2} - x + C \cr} $$ The answer $ = {1 \over 2}(2x + 1)\ln (2x + 1) - x + C$ Where did I go wrong? Thanks! AI: Starting from your second to last line (your integration was fine, apart from a few missing $dx$'s in your integrals): $$ = x\ln (2x + 1) + \ln |{(2x + 1)^{{1 \over 2}}}| - x + C \tag{1}$$ Good, up to this point... $\uparrow$. So the error was in your last equality at the very end: You made an error by ignoring the fact that the first term with $\ln(2x+1)$ as a factor also has $x$ as a factor, so we cannot multiply the arguments of $\ln$ to get $\ln(2x+1)^{3/2}$. What you could have done was first express $x\ln(2x+1) = \ln(2x+1)^x$ and then proceed as you did in your answer, and your result will then agree with your text's solution. Alternatively, we can factor out like terms, rewriting $(1)$ as $$ = x\ln(2x + 1) + \frac 12 \ln(2x + 1) - x + C$$ $$= \color{blue}{\bf \frac 12 }{\cdot \bf 2x} \color{blue}{\bf \ln(2x+1)} + \color{blue}{\bf \frac 12 \ln(2x+1)}\cdot {\bf 1} - x + C$$ Factoring out $\color{blue}{\bf \frac 12 \ln(2x + 1)}$ gives us $$= \left(\dfrac 12\ln(2x + 1)\right)\cdot \left(2x +1\right) - x + C $$ $$= \frac 12(2x + 1)\ln(2x+1) - x + C$$
H: Combination calculation with reducing set size My statistics aren't too great, so I'm struggle to work out the result of the following situation. Say you have 5 sets of 5 possible options (25 options total); and you select 1 option from each set. Each time you select 1 option from a set, that set is removed from the next round of possible options; leaving 4 sets of 5 possible options. Again, pick another option, leaving 3 sets of 5 possible options. So the selection process is to pick 5 options, one from each set; and the total amount of options reduces by 5 on each round of selection. Eg. Set 1 Option 1, Option 2, Option 3, Option 4, Option 5 Set 2 Option 6, Option 7, Option 8, Option 9, Option 10 Set 3 Option 11, Option 12, Option 13, Option 14, Option 15 Set 4 Option 16, Option 17, Option 18, Option 19, Option 20 Set 5 Option 21, Option 22, Option 23, Option 24, Option 25 So you pick Option 1 first, that leaves Sets 2-5 (20 options remaining) Then you pick Option 6, that leaves Sets 3-5 (15 options remaining) You pick Option 11, that leaves Sets 4-5 (10 options remaining) You pick Option 16, that leaves Set 5 (5 options remaining) You pick Option 21, there are no items left The order that the items are selected in is not important - but how many possible combinations does it mean you could select? The very basic maths of 25 * 20 * 15 * 10 * 5 = 375,000 Wouldn't factor in out-of-order repetition (which we don't care about). So how many combinations could there be? AI: You don't need to worry about the person choosing from the second set first, then the first, since in the end, there is exactly one option chosen from each set. Thus you can view the choices being made in order: first one option is chosen from 1-5, then one in 6-10, and so on, so that you get $5^5=3125$ possibilities.
H: probability that a random line segment parallel to the hypot. of a triangle with legs 3 and 4 will inclose an area of at least half Sorry for the unclear title. It was difficult to explain the problem in a concise way in 150 characters. A right triangle has the legs 3 and 4 units, respectively. Find the probability that a line segment, drawn at random parallel to the hypotenuse and contained entirely in the triangle, will divide the triangle so that the area between the line and the vertex opposite the hypotenuse will equal at least half the area of the triangle. Make a sketch of the triangle. I am completely stuck on this question. First of all I am assuming that the legs of length 3 and 4 are the ones containing the right angle but I am not sure if this is even right. If this is the case then a line segment drawn at random will divide the area of the triangle if it intersects the two legs (call them $a$ and $b$) at exactly $\frac{1}{\sqrt{2}}$ of their length. This way both the height and width of the smaller triangle will be divided by $\frac{1}{\sqrt{2}}$ and thus the new area will be $\frac{ab}{2}$ which is half of $ab$, the original area. How do I extend this idea to answer the question fully? Is the probability just $1-\frac{1}{\sqrt{2}}$? Thanks in advance! AI: Having drawn a line parallel to the hypotenuse, the smaller triangle will be proportional to the larger triangle. We will assume that 'at random' means the value of the proportionality constant is chosen uniformly between $0$ and $1$. As you've noted, if it is anything larger than $\frac{\sqrt2}{2}$, the area will be at least half of the original area. Hence, the desired probability is indeed $1-\frac{\sqrt{2}}{2}$.
H: Derivative of conjugate transpose of matrix Building off of my previous question, I am trying to derive the normal equations for the least squares problem: $$ \min_W \|WX - Y\|_2 \\ W \in \mathbb{C}^{N \times N} \quad X, Y \in \mathbb{C}^{N \times M} $$ The intuitive way of viewing this problem is that I am trying to predict a vector $y$ (of length $N$) from a corresponding $x$ vector using a matrix $W$, and to estimate $W$ I have multiple ($M$, to be precise) realizations of $x$ and $y$ packed into matrices $X$ and $Y$. I am trying to define this in terms of a least-squares problem and derive the normal equations myself, but I'm running into issues in taking the derivative. To spell it out explicitly, I can re-state the above equation as: $$ \min_W (WX - Y)^H (WX - Y) \\ = \min_W (X^H W^H - Y^H) (WX - Y) \\ = \min_W (X^H W^H W X - Y^H W X - X^H W^H Y + Y^H Y) $$ Now typically I would take the derivative with respect to $W$, set it equal to zero, and solve for $W$. However, my matrix calculus is rusty and everything I know is basically summed up on this webpage, where it explicitly states: Note that the Hermitian transpose is not used because complex conjugates are not analytic. Now, because of Michael C. Grant's answer I feel there must be some way of doing this, but I am at a loss as to how. Thank you all in advance! AI: In the first version of my answer I overlooked the fact that you defined the $L_2$ error in a way that is not suitable for matrices. For vectors it would have been OK, but if your complex error is $E=WX-Y$, then $E^HE$ is not the error that you want to minimize, because $E^HE$ is a matrix (if $E$ were a vector, then $E^HE$ would be a scalar and you could minimize it). The appropriate definition of the (scalar) error measure is given by the Frobenius norm $$\epsilon=\|E\|_F^2=\|WX - Y\|_F^2=\\ =\text{trace}\left ( E^HE\right)=\text{trace}\left ( (WX-Y)^H(WX-Y)\right)=\\ =\text{trace}\left ( X^HW^HWX - X^HW^HY - Y^HWX +Y^HY\right)$$ Now we can apply the trick of taking the conjugate complex derivative (reference) w.r.t. $W^H$: $$\frac{\partial\epsilon}{\partial W^H} = WXX^H - YX^H$$ Setting the derivative to zero gives the solution $$W=YX^H(XX^H)^{-1}$$ assuming $(XX^H)^{-1}$ exists. EDIT: just an additional simple example to show how the trick with the conjugate derivative works when solving for an unknown complex vector (instead of a matrix). In this case we get a well-known result: Assume we have an overdetermined system of complex linear equations $$Ax=b$$ where $A$ is $m\times n$, $m>n$, $x$ is $n\times 1$, and $b$ is $m\times 1$. Minimizing the squared error $$\epsilon=(Ax-b)^H(Ax-b)=x^HA^HAx - x^HA^Hb-b^HAx+b^Hb$$ by taking the derivative w.r.t. $x^H$ and equating it with zero gives $$A^HAx-A^Hb=0$$ which leads to the well-known Moore-Penrose pseudoinverse: $$x=(A^HA)^{-1}A^Hb$$ EDIT 2: Let's have a look at two little examples showing that it is correct to regard a complex variable $z=z_R+iz_I$ as constant when taking the derivative w.r.t. $z^*$. The conjugate derivative is defined as (ref.) 
$$\frac{\partial f}{\partial z^*}=\frac{1}{2}\left [\frac{\partial f}{\partial z_R} +i\frac{\partial f}{\partial z_I} \right]$$ For $f=z$ we get $$\frac{\partial f}{\partial z^*}= \frac{1}{2}\left [\frac{\partial (z_R+iz_I)}{\partial z_R} +i\frac{\partial (z_R+iz_I)}{\partial z_I} \right]=\frac{1}{2}(1+i^2)=0$$ For $f=zz^*$ we have $$\frac{\partial f}{\partial z^*}= \frac{1}{2}\left [\frac{\partial (z_R^2+z_I^2)}{\partial z_R} +i\frac{\partial (z_R^2+z_I^2)}{\partial z_I} \right]=\frac{1}{2}(2z_R+2iz_I)=z$$
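A numerical sanity check of the normal equations (my own numpy sketch, not part of the original answer): the closed form should agree with a generic least-squares solver applied to the transposed system, since $\|WX-Y\|_F=\|X^TW^T-Y^T\|_F$.

    import numpy as np

    rng = np.random.default_rng(1)
    N, M = 4, 10
    X = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))
    Y = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))

    # closed form derived above: W = Y X^H (X X^H)^{-1}
    W_closed = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)
    # generic solver: minimize ||X^T B - Y^T||_F over B, then W = B^T
    W_lstsq = np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T
    print(np.allclose(W_closed, W_lstsq))   # True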
H: Find the value of $x^3-x^{-3}$ given that $x^2+x^{-2} = 83$ If $x>1$ and $x^2+\dfrac {1}{x^2}=83$, find the value of the expression$$x^3-\dfrac {1}{x^3}$$ a) $764$ b) $750$ c) $756$ d) $760$ In this question from given I tried to approximate the value of $x$ which should just above to 9 then I tried to calculate the value of cubic expression but all options are close enough to guess. Any idea to solve it? AI: First, notice that $$\left(x-\frac{1}{x}\right)^2=x^2-2+\frac{1}{x^2}=83-2=81$$ Then, use the difference of two cubes formula: $$x^3-\frac{1}{x^3}=\left(x-\frac{1}{x}\right)\left(x^2+1+\frac{1}{x^2}\right)=9\cdot(83+1)=756$$ We take the positive root of $81$ because $x>1$.
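A quick numeric confirmation (my own sketch): from $x-\frac1x=9$ with $x>1$ we get $x=\frac{9+\sqrt{85}}2$, and plugging in recovers both given values.

    from math import sqrt

    x = (9 + sqrt(85)) / 2        # positive root of x - 1/x = 9
    print(x**2 + 1/x**2)          # 83.0...
    print(x**3 - 1/x**3)          # 756.0...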
H: Linear Algebra - Another way of Proving a Basis? If we have a Vector Space $V$, can we prove that a set $S$ of vectors $\in V$ is a basis of $V$ given that: $S$ contains the same number of vectors as $\dim{(V)}$. Every vector in a basis of $V$ can be written as a linear combination of the vectors in S Example: Let $V$ be $\Bbb R_3$. The Standard Basis of $\Bbb R_3$ is $\{b_1,b_2,b_3\}=\{(1,0,0),(0,1,0),(0,0,1)\}$. Let $S$ be $\{v_1,v_2,v_3\}=\{(1,0,0),(1,1,0),(1,1,1)\}$. Then: $$ \begin{align} v_1 = b_1 \\ v_2 - v_1 = b_2 \\ v_3 - v_2 = b_3 \end{align} $$ So: $$ \begin{align} c_1b_1+c_2b_2+c_3b_3 = (a,b,c) \\ c_1(v_1)+c_2(v_2-v_1)+c_3(v_3-v_2) = (a,b,c) \\ (c_1-c_2)v_1 + (c_2-c_3)v_2 + c_3v_3 = (a,b,c) \end{align} $$ therefore, since $\{b_1,b_2,b_3\}$ is independent (let $a = b = c = 0$) and spanning, $S$ is also independent and spanning so $S$ is a basis of $V$ If a set $S$ satisfies the before-mentioned conditions, is it a basis? Edit: in response to Andres Caicedo, yes, $V$ is finite dimensional. AI: Yes, every spanning set contains a basis: you just remove vectors that can be written as a linear combination of the others. So we can remove vectors from $S$ to get a basis. But the resulting basis must have $\dim V$ vectors and that's how many vectors $S$ has. Therefore we removed $0$ vectors to get the basis. The basis is $S$. Similarly if $|S| = \dim V$ and $S$ is a linearly independent set then $S$ is a basis.
H: Book Suggestions for an Introduction to Measure Theory Couldn't find this question asked anywhere on the site, so here it is! Do you guys have any recommendations for someone being introduced to measure theory and Lebesgue integrals? A mentor has suggested a book that's in French, but unfortunately I don't know French (heck I barely know English) - so English books only please! Thanks in advance! EDIT: Did not realize this had been asked here but I'm going to leave this question open to see if there are any newer books (that question was originally asked in 2011). If I am doing something wrong here, just give me a gentle shoutout in the comments and I'll be understanding and close this AI: You might want to take a look at Schilling's Measures, Integrals, and Martingales. It's a great introductory text for Measure Theory, gentle, but rigorous. The author's website has solutions to the book, as well as Errata, etc. You can take a look at the table of contents in the link given above.
H: Why isometric isomorphic between Banach spaces means we can identify them? Is the "isometric" part really necessary? For what reason is that? Eg. we prove that there is an isometric isomorphism between $(L^p)'$ and $L^q$ ($(p,q)$ conjugate) and then we identify them together as the same space. If they were isomorphic but not necessarily isometric, could we not identify them? AI: Each mathematical theory studies its own objects and the maps between them. In our case these are Banach spaces and bounded linear maps. We choose linear maps because we want them to preserve the linear structure of Banach spaces, and we choose them bounded so that they interact nicely with the norm structure. Once we choose our objects and the transformations between them, we want to classify our objects. Here mathematicians invent different properties which can be used for classification: the more properties two Banach spaces have in common, the more similar they are. Then we want to know which maps preserve a particular property. The best maps in Banach space theory are: isomorphisms, i.e. bijective bounded linear operators with bounded inverse; and isometric isomorphisms, i.e. isomorphisms that preserve norms. The first type of isomorphism preserves the following properties: complementability, separability, completeness, type/cotype, reflexivity, and many others that depend on the topology generated by the norm. Since the topology of a Banach space can be generated by different norms, isomorphisms of Banach spaces cannot capture all properties of the norm structure of a Banach space. In this case the second type of isomorphism is useful. They preserve all the properties mentioned above, plus 1-complementability, extreme points, distances between subsets, and others that depend on the metric generated by the norm. These two types of isomorphism give two ways of identifying Banach spaces - a tolerant one and a more rigorous one. For example, $\ell_1^n$ and $\ell_2^n$ are two $n$-dimensional Banach spaces that are isomorphic but not isometrically isomorphic. Note that isomorphisms of Banach spaces capture only those properties of spaces that correspond to their Banach space structure. For example, the space of convergent sequences $c$ is isomorphic as a Banach space to the space $c_0$ of sequences converging to $0$. But as Banach algebras (Banach spaces with continuous multiplication of vectors) they are not isomorphic, because $c$ is unital and $c_0$ is not.
H: prove by induction that $P\left(\bigcup\limits_{i=1}^{n} E_i\right) = 1-\prod\limits_{i=1}^{n}(1-P(E_i))$, $E_1,E_2,\ldots , E_i$ independent Suppose $E_1,E_2,\ldots , E_i$ are independent events. prove by induction that $$P\left(\bigcup\limits_{i=1}^{n} E_i\right) = 1-\prod\limits_{i=1}^{n}(1-P(E_i))$$ The first step is easy. For $n=1$ we have $$P\left(\bigcup\limits_{i=1}^{1} E_i\right) = P(E_1)= 1-(1-P(E_i))=1-\prod\limits_{i=1}^{1}(1-P(E_i))$$ I don't really see how to continue from here though. Any help would be appreciated! AI: I figured it out so I thought I might as well post it. We note that: $$P\left(\bigcup\limits_{i=1}^{1} E_i\right) = P(E_1)= 1-(1-P(E_i))=1-\prod\limits_{i=1}^{1}(1-P(E_i))$$ Now suppose for a certain $k \in \mathbb{N}$ that: $$P\left(\bigcup\limits_{i=1}^{k} E_i\right) = 1-\prod\limits_{i=1}^{k}(1-P(E_i))$$ So $$P\left(\left(\bigcup\limits_{i=1}^{k} E_i\right)^c\right)=\prod\limits_{i=1}^{k}(1-P(E_i))$$ Now for $k+1$ we have: $$P\left(\bigcup\limits_{i=1}^{k+1} E_i\right)=P\left(E_{k+1}\cup\left(\bigcup\limits_{i=1}^{k} E_i\right)\right)$$ Now we use that $P(A\cap B) = P(A)P(B)$ for independent $A$ and $B$, and the fact that if all $E_i$ are independent then so are the events $E_i^c$: $$P\left(E_{k+1}^c\cap \left(\bigcup\limits_{i=1}^{k} E_i\right)^c\right)=(1-P(E_{k+1}))\prod\limits_{i=1}^{k}(1-P(E_i))=\prod\limits_{i=1}^{k+1}(1-P(E_i))$$ Finally we use the fact that $P(A ^c\cap B^c)=P\left((A\cup B)^c\right)$ so $$P\left(E_{k+1}^c\cap \left(\bigcup\limits_{i=1}^{k} E_i\right)^c\right) = 1 - P\left(E_{k+1}\cup\left(\bigcup\limits_{i=1}^{k} E_i\right)\right)=1 - P\left(\bigcup\limits_{i=1}^{k+1} E_i\right)$$ Combining these results we have $$1 - P\left(\bigcup\limits_{i=1}^{k+1} E_i\right) = \prod\limits_{i=1}^{k+1}(1-P(E_i))$$ or $$ P\left(\bigcup\limits_{i=1}^{k+1} E_i\right) = 1-\prod\limits_{i=1}^{k+1}(1-P(E_i)).$$ Now by induction we have $$ P\left(\bigcup\limits_{i=1}^{n} E_i\right) = 1-\prod\limits_{i=1}^{n}(1-P(E_i))$$ for all $n \in \mathbb{N}$.
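The identity can also be checked directly against inclusion-exclusion (my own Python sketch; for independent events $P(\bigcap_{i\in S}E_i)=\prod_{i\in S}p_i$):

    from itertools import combinations
    from math import prod
    import random

    p = [random.random() for _ in range(5)]

    complement_form = 1 - prod(1 - pi for pi in p)
    incl_excl = sum((-1) ** (r + 1) * sum(prod(s) for s in combinations(p, r))
                    for r in range(1, len(p) + 1))
    print(complement_form, incl_excl)   # the two values agree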
H: 4 equations with set of numbers I must make 1 addition (x+y=z), 1 subtraction (x-y=z), 1 multiplication (x*y=z), and 1 division (x/y=z) equation with the following numbers. All the numbers must be used to fill x, y, and z of each equation. x, y, and z can consist be 1, 2, or 3-digit numbers. 0-4 available 1-9 available 2-1 available 3-2 available 5-1 available 6-1 avilalble 7-0 available 8-1 available 9-1 available Thanks! AI: There is no magic here, just guess and try. I find $11+18=29, 11-6=5, 100*0=0,31/1=31$ as one of what must be many solutions.
H: Minimizing Mean Squared Error for Exponential Function I have a function that I'm trying to model using an exponential function and I'm trying to determine the constants for the exponential. I know I could optimize it using trial-and-error in R or another language, but I'd like to learn an analytic solution. I figured that minimizing mean squared error would be the way to go about this, so I have: $$\underset{r,k}{\operatorname{argmin}}\sum_{t=0}^T (s(t)-\hat{s}(t\mid r,k))^2$$ Where $s(t)$ is the function that I'm trying to model and $\hat{s}(t)$ is $(1+r)^{t+k}$ . The only constraint I could come up with was $r>0$. I suppose I could also assume $k>0$ for now. I learned about Kuhn-Tucker constraints as an extension to Lagrange constraints (which I already know) but I wasn't able to solve it. Am I even going about this the right way? If I am, how can I solve this problem? Thanks in advance! AI: Your setup is fine. This sort of problem will not (usually) have an analytic solution. You have a two-dimensional non-linear minimization problem. There are many numeric routines that can solve this in libraries, and they are discussed in any numerical analysis text. They really consist of informed trial and error, where the informed part comes from keeping track of past trials to build up information about the error function.
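In practice one would hand this to a nonlinear least-squares routine. A sketch with scipy (my own illustration, not the poster's code; $s(t)$ here is synthetic data, and the constraint $r>0$ is imposed through bounds):

    import numpy as np
    from scipy.optimize import curve_fit

    def model(t, r, k):
        # the proposed form s_hat(t | r, k) = (1 + r)^(t + k)
        return (1.0 + r) ** (t + k)

    t = np.arange(0, 20)
    s = model(t, 0.08, 2.0) + 0.01 * np.random.default_rng(0).normal(size=t.size)

    (r_hat, k_hat), _ = curve_fit(model, t, s, p0=[0.1, 0.0],
                                  bounds=([1e-9, -np.inf], [np.inf, np.inf]))
    print(r_hat, k_hat)   # recovers values close to (0.08, 2.0)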
H: Find the value of $\frac{\tan\theta}{1-\cot\theta}+\frac{\cot\theta}{1-\tan\theta}$ I want to know an objective approach to solve this type of expression quickly. Which of the expressions equals $$\dfrac{\tan\theta}{1-\cot\theta}+\dfrac{\cot\theta}{1-\tan\theta}$$ a)$1-\tan\theta-\cot\theta$ b)$1+\tan\theta-\cot\theta$ c)$1-\tan\theta+\cot\theta$ d)$1+\tan\theta+\cot\theta$ I've tried it several ways, like taking the LCM and changing everything into $\sin\theta$ and $\cos\theta$, but I've gotten stuck. AI: Just to add the "cheating" method of solving these (say on an exam when you're pressed for time): Use the symmetry under $\theta\mapsto\frac{\pi}{2}-\theta$ (which swaps $\tan$ and $\cot$ but leaves the expression unchanged) to show that only options (a) and (d) are viable. Then plug in some value (say $\theta=\pi/8$) to find which one of them is "right".
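A quick numeric check that option (d) is the right one (my own sketch; avoid $\theta=\pi/4$, where the expression is undefined):

    from math import tan, pi

    for theta in (pi/8, 0.3, 1.1):
        t, c = tan(theta), 1/tan(theta)
        lhs = t/(1 - c) + c/(1 - t)
        print(lhs, 1 + t + c)   # the two columns agree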
H: Property of natural numbers involving the sum of digits How can you prove that every natural number $M$ or $M+1$ can be written as $k + \operatorname{Sum}(k)$, where $\operatorname{Sum}(k)$ represents the sum of the digits of some number k. Example: $$ 248 = 241 + \operatorname{Sum}(241) = 241 + 2 + 4 + 1$$ AI: Hint: Each term in the sequence $k + \mbox{Sum}(k)$ either increases by 2, or decreases by some amount. Why does that tell you that the image of $k + \mbox{Sum}(k)$ must include either $M$ or $M+1$? Proof of Hint: If the last digit is not 9, then $k$ and $\mbox{Sum}(k)$ will both increase by 1, hence their sum increases by 2. If the last digit is 9, then $\mbox{Sum}(k)$ will decrease by at least 8, hence their sum decreases by at least $7$.
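A direct verification of the claim for small $M$ (my own Python sketch, added for illustration):

    def digit_sum(k):
        return sum(int(d) for d in str(k))

    image = {k + digit_sum(k) for k in range(1, 20000)}
    assert all(M in image or M + 1 in image for M in range(1, 10000))
    print("verified for M < 10000")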
H: Blackjack card counting, with one whole deck should the "count" end on zero? When playing blackjack if you are card counting for a single deck, should the count always come to zero at the end of the deck? Wouldn't it depend on strategy and a corresponding betting table? Would the optimum card counting strategy end you with a count of zero for one deck? AI: It depends on the system of card counting. With the Hi-Lo count (the most common), the count should sum to $0$ after every single card in the deck has been played. 2 3 4 5 6 = +1 (low cards) 7 8 9 = 0 (neutral cards) 10 J Q K A = -1 (high cards) Using this chart, it is easy to see that there is an equal number of $+1$ cards and $-1$ cards. Over the course of the deck, there will be a total of $20$ low cards and $20$ high cards (along with $12$ neutral cards). This means that the count at the end must equal zero. Several of the more advanced systems, however, will not have a balance between positives and negatives, and they will then have a non-zero count at the end of the deck.
H: A tricky probability question. I have been asked the following question, and unfortunately I have no idea how to proceed. Here is the question: Suppose we have 99 empty papers and we wrote numbers from 1 to 99(using each number) on one side of the papers randomly. Then we mixed all the papers randomly and started to write numbers from 1 to 99 on the other sides (empty sides) of each paper randomly. What is the probability of having the same number on both sides of at least one paper? Thanks! AI: Hint: Derangements. Hint: Complement.
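Making the hints concrete (my own addition): the matching of back numbers to front numbers is a uniformly random permutation, so the probability of no paper having equal sides is $D_{99}/99!=\sum_{k=0}^{99}(-1)^k/k!$, and the desired probability is its complement, roughly $1-1/e$. A sketch in Python:

    from fractions import Fraction
    from math import factorial

    # D_n / n! = sum_{k=0}^{n} (-1)^k / k!   (derangement ratio)
    no_match = sum(Fraction((-1) ** k, factorial(k)) for k in range(100))
    print(float(1 - no_match))   # about 0.6321, i.e. 1 - 1/e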
H: Finite abelian $2$-group If $G$ is a finite abelian group such that $o(x)=2$ for all $x \neq e$ and $|G|=2^n$ for some $n\in\mathbb N$, prove that $G \cong \mathbb{Z}_2\times\cdots\times\mathbb{Z}_2$ ($n$ factors). Any help is appreciated. I asked this question before and was suggested to try induction but I haven't been able to write down the inductive proof properly. If anybody can help that will be really nice. Thanks! AI: The proposition is true for $n=1$. For $n\geq 2$, take $g_1 \in G$ and let $G_1=\langle g_1\rangle$. For $i=2,\dots, n$ take $g_i \in G \setminus G_{i-1}$ where $G_{i-1}=\langle g_1,\dots,g_{i-1}\rangle$. By the induction hypothesis (or directly), for $i < n, G_i \cong \mathbb{Z}_2^i$. In particular $G_{n-1} \cong \mathbb{Z}_2^{n-1}$. By construction $g_n \in G\setminus G_{n-1}$, so $\langle g_n \rangle \cap G_{n-1}=\{e\}$, they are both normal since $G$ is abelian and every element in $G$ can be written as a product of elements from $\langle g_n\rangle$ and $G_{n-1}$. (You can see this by noting that the left cosets of $G_{n-1}$ are $G_{n-1}$ and $g_nG_{n-1}$ and partition the set.) Thus $G \cong \langle g_n \rangle \times G_{n-1} \cong \mathbb{Z}_2\times\mathbb{Z}_2^{n-1}\cong\mathbb{Z}_2^n$ as required. This is a rather brute force approach, but there's only one idea behind it i.e. "keep pulling out $\mathbb{Z}_2$ until there's nothing left". The solution is much simpler if you know about modules (a generalisation of vector spaces) or by treating it as a linear algebra problem (viewing $G$ as a vector space over the field $\mathbb{F}_2$), but there's nothing wrong with this elementary approach.