H: Circle Chord Sequence
This is my first post, so be nice!
When I was in my first Geometry class in high school, I asked the teacher the following:
Given a circle of radius 2a, find the length of the chord running parallel to the diameter of the circle such that the semicircle cut by the chord is divided into two regions of equivalent area.
The teacher looked at me as if he knew the answer but then paused, puzzled, and told me that we should work on it after class. So we did, and it was at this moment that I was introduced to trigonometry. Yet we did not find a solution and the problem was filed into the back of my memory for some time. A year and a half later, at the end of Algebra 2/Trig, the problem re-entered my awareness and I took another crack at it. After maybe an hour of work I found the solution! I was so excited that I asked my teacher if I could present it to the class on the second to last day of school, and she consented. So I presented it, but crap I forgot my notes and I screwed up the work leading to the solution and embarrassed myself in front of everybody. I cleaned it up after class when I had more time, but by then only the real math enthusiasts were left. Anyway, that was completely tangential, as I would now like to pose the question that I came here with:
Skip to here if you don't care about anything but mathematics:
Given a circle of radius $2a$, find the sequence of real numbers given by the lengths of the chords produced by successive iterations of slicing the circular segment, each chord running parallel to the diameter of the circle, such that the area of the segment is halved with each successive slice.
Oh, and the answer to the first question is something like 0.71... or 0.79..., I have forgotten by now. Bonus points to the most elegant solution of the first problem.
AI: Wlog. the radius is $1$. The area of the cap subtending angle $\alpha$ from the center is $A(\alpha)=\frac12(\alpha-\sin\alpha)$. The corresponding segment length is $2\sin\frac\alpha2$. You are asking for the sequence $(x_n)_{n=1}^\infty$ such that $x_n=2\sin\frac{\alpha_n}2$ and $A(\alpha_n)=\frac\pi2\cdot2^{-n}$. This can only be solved numerically.
The first few approximations (after $x_0=2$) are:
$$1.8295420351460716385409955906837076169, 1.5455097261234120202206332289590795207, 1.2678613848545969240544214118382796268, 1.0255171578787055775354919807723448528, 0.82317890720583251879913780639731025539, 0.65785871279582689026079790969165393609, 0.52435985537171345597016857982005882625, 0.41728294853497515917670605958970406336, 0.33174364554487851240387578246924654023, 0.26357709084095859727510715888869346114, 0.20933695838483619922846766000771974415, 0.16621859822603799414074748448500084143, 0.13196163045501028275182369205869205768, 0.10475492671579429585769886806519067206, 0.083152493022244344061029851545052090070, 0.066002402792776475648886352412199553896, 0.052388254185767766597240438200662186974, 0.041581640994898317827871218807935473777, 0.033003898345400309144536032806404103535, 0.026195475475160737053195360050566833997.$$
For big $n$ (that is small $\alpha$) one can employ the approximation $A(\alpha)\approx \frac1{12}\alpha^3\approx\frac1{12}x^3$ (from $\sin x=x-\frac16x^3+\frac1{120}x^5\mp\ldots$), hence
$$x_n\approx \sqrt[3]{12\cdot\frac\pi2\cdot 2^{-n}} =\frac{\sqrt[3]{6\pi}}{(\sqrt[3]2)^n}.$$
For $n=20$, this approximation gives us $0.02619592\ldots$ instead of the last value in the list above. |
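For readers who want to reproduce these values, here is a small numerical sketch in Python (the bisection scheme and the values of $n$ shown are my choices):
import math

def alpha_for(area):
    # A(a) = (a - sin a)/2 is increasing on [0, 2*pi], so bisect
    lo, hi = 0.0, 2 * math.pi
    for _ in range(200):
        mid = (lo + hi) / 2
        if (mid - math.sin(mid)) / 2 < area:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for n in (1, 5, 10, 20):
    a = alpha_for(math.pi / 2 * 2.0 ** -n)
    x = 2 * math.sin(a / 2)                           # chord length x_n
    approx = (6 * math.pi) ** (1 / 3) / 2 ** (n / 3)  # asymptotic formula
    print(n, x, approx)                               # approx is good for large n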
H: Compatibility of pointwise and distributional convergence
Let $\Omega$ be an open subset of $\mathbb{R}^n$ and let $u_k,\, u$ and $v$ be elements of $L^1_{\mathrm{loc}}(\Omega)$. Assume that
$$u_k(x)\to u(x)\quad x\text{-a.e.} \qquad\text{and}\qquad u_k\to v\quad \text{in }\mathscr{D}'.$$
Does it follow that $u=v$ almost everywhere?
As it is usual, the convergence $u_k\to v$ in $\mathscr{D}'$ is defined as follows:
$$\int_{\Omega}u_k(x) \phi(x)\, dx \to \int_{\Omega}v(x)\phi(x)\, dx,\qquad\text{for all } \phi \in C^{\infty}_c(\Omega).$$
The point of this question is to show that there is a minimum degree of "compatibility" between the two notions of convergence, even if neither of the two implies the other.
NOTE CAREFULLY. This question has already received an excellent answer by Chris Janjigian, which will be awarded the bounty.
AI: Unless I have read this wrong, the answer is no. For example, consider $k\chi_{[0,\frac{1}{k}]}$. This sequence of functions converges pointwise to zero everywhere except at zero, but converges in the sense of distributions to the Dirac delta at zero.
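A quick numerical illustration of this example (a sketch; the test function $\varphi=\cos$ and the grid size are my choices):
import numpy as np

def pairing(k, phi, m=100000):
    # midpoint rule for the integral of k * chi_[0,1/k] * phi
    x = (np.arange(m) + 0.5) / (m * k)   # midpoints covering [0, 1/k]
    return np.mean(phi(x))               # k * (1/k) * (average of phi)

for k in (10, 100, 1000, 10000):
    print(k, pairing(k, np.cos))         # tends to cos(0) = 1, i.e. delta_0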
Edit: Even if we assume that $v$ is in $L^1_{loc}$ the result is still not true. I am copying a large chunk of this answer from my solution to a problem in Rudin Real and Complex Analysis. The result proven here is that there exists a sequence of continuous functions converging to 0 pointwise $a.e.$, but converging to $1$ in the sense that $\langle g_n, \varphi \rangle \to \langle 1, \varphi \rangle$ for all $\varphi \in C[0,1]$.
To prove this, we will exploit the fact that Riemann sums converge for continuous functions and, when they exist, Riemann integrals coincide with Lebesgue integrals. It therefore suffices to show that we can produce a sequence of measures which correspond to Riemann sums. The standard endpoint Riemann sum of mesh $\frac{1}{n^{2}}$ can be constructed by partitioning $[0,1]$ into intervals of length $\frac{1}{n^{n}}$ and placing a weighted Dirac delta $\frac{1}{n^{2}}\delta(x-\frac{k}{n^{2}})$ at each endpoint $\frac{kn^{n-2}}{n^{n}}=\frac{k}{n^{2}}$, $1\leq k\leq n^{2}$. Notice that the fact that we are using endpoint Dirac measures here does not actually matter, since Riemann sums of continuous functions converge for any arbitrary choice of point in each interval. We modify the above example slightly so that instead of distributions we use continuous functions:
Keep the same partition, but in the interval $[\frac{kn^{n-2}-1}{n^{n}},\frac{k}{n^{2}}]$, draw a triangle (the explicit formula for this function is easy to produce and not terribly informative, so I exclude it) from $0$ up to $2n^{n-2}$ and back down (with the vertex occurring at the midpoint). So $g_{n}$ is $0$ on $[\frac{k}{n^{2}},\frac{k+1}{n^{2}}-\frac{1}{n^{n}}]$, with a big spike just before every multiple of $\frac{1}{n^{2}}$. Integrating a function over such an interval with respect to the measure induced by $g_n$ gives $\frac{1}{n^{2}}$ times a weighted average of the function over that interval. This follows from continuity: for any fixed $f$ there exist $a,b\in[\frac{kn^{n-2}-1}{n^{n}},\frac{k}{n^{2}}]$ so that for all $x\in[\frac{kn^{n-2}-1}{n^{n}},\frac{k}{n^{2}}]$, $f(a)\leq f(x)\leq f(b).$ Then $$\frac{1}{n^{2}}f(a)\leq\int_{[\frac{kn^{n-2}-1}{n^{n}},\frac{k}{n^{2}}]}g_{n}f\,dm\leq\frac{1}{n^{2}}f(b)$$ and therefore, by the intermediate value theorem, there exists $x_{k}\in[\frac{kn^{n-2}-1}{n^{n}},\frac{k}{n^{2}}]$ so that this integral equals $\frac{1}{n^{2}}f(x_{k})$. By construction, these functions integrate to $1$ (there are $n^{2}$ triangles, each of which integrates to $\frac{1}{n^{2}}$). We can see from the above argument that for continuous functions the functional $f\mapsto\int fg_{n}\,dm$ is in fact an operator sending $f$ to a Riemann sum of mesh $\frac{1}{n^{2}}$. Hence the measures $g_{n}\,dm$ converge in the weak$^{*}$ sense to $m$ when tested against continuous functions. All that remains to be shown is that these functions converge to $0$ pointwise almost everywhere.
Define $A_{n}=\{x:g_{n}(x)>0\}$ and observe that $m(A_{n})=\frac{1}{n^{n-2}}$ (there are $n^{2}$ sets of measure $\frac{1}{n^{n}}$ on which $g_{n}$ is positive), which is summable. By the Borel-Cantelli Lemma, $m(\{x:g_{n}(x)\neq0\text{ for infinitely many }n\})=0$.
It follows that $g_{n}\to0$ a.e. $\{m\}$. |
H: Hyperbolic Geometry - reference request
I need some information about Hyperbolic Geometry. For example, is Spherical Geometry a subfield of Hyperbolic Geometry, or not?
Can you suggest to me a book or some other reference to help me better understand these notions?
Thanks a lot!
AI: You might want to start with this very nice ~$60$ page pdf: Hyperbolic Geometry - from Flavors of Geometry by Cannon, Floyd, Kenyon, and Parry. (MSRI Publications Volume 31, 1997.)
See also references and external links available in the Wikipedia entry for Hyperbolic Geometry. The entry itself is informative, and you'll quickly see that there is not a hyperbolic geometry, but rather, four or five models of hyperbolic geometries that share some fundamental characteristics. Compare and contrast with the Wikipedia entry for Spherical Geometry, which is not a proper "subsection" of hyperbolic geometries, though it is a non-Euclidean Geometry.
You can't go wrong, of course, with Euclidean and Non-Euclidean Geometries: Development and History by Marvin J. Greenberg. |
H: Horizontal bar notation for isomorphisms or bijections
I have seen in many books, particularly on category theory, the use of a horizontal bar to indicate some sort of equivalence, but I have not seen a proper definition in any context.
For example:
$$\dfrac{X \to Y^T}{T \times X \to Y}$$
The explanation given in the book is that a map $X \to Y^T$ is the same as a map $T \times X \to Y$. What kind of sameness is this? Is there a bijection between the two collections of maps, an isomorphism, a unique isomorphism? Or does the meaning depend on the context, and it's a generic notation for equivalence?
Thanks.
I should add: I understand the intuitive meaning, but I've seen proofs using only chains of these equivalences (for example, to prove that $Y^{T+S}\cong Y^T\times Y^S$), but I'm not entirely sure of their rigor.
AI: User69810 has given a good explanation of the informal intent of the notation, "informal" in the sense that it refers to "doing" things and to "equivalence" between such doings. For a formal explanation, I would say that the notation means two or even three things. First, it means that there is a bijection between entities of the sort indicated above the line and entities of the sort indicated below. Second, it indicates that the author has a particular such bijection in mind, and that either the reader is supposed to guess (correctly) which bijection this is or the author will explain, preferably in the next few lines of text. Finally, if you're willing to include some category-theoretic ideas, it means that the bijection is natural. (More details about this naturality: In most cases, including the one in your question, the entities above and below the horizontal line are not merely sets but functors of the relevant variables. In your case, you have functors covariant in $Y$ and contravariant in $X$ and $T$. The notation is usually understood to mean that the intended bijections, one for each choice of $Y,X,T$, constitute a natural isomorphism between these functors.) |
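As an aside, this particular bijection is what programmers call currying. Here is a minimal sketch in Python (the names curry and uncurry are mine, not notation from the book):
def curry(f):
    # Hom(T x X, Y) -> Hom(X, Y^T): fix x, return a function of t
    return lambda x: lambda t: f(t, x)

def uncurry(g):
    # Hom(X, Y^T) -> Hom(T x X, Y): the inverse direction
    return lambda t, x: g(x)(t)

f = lambda t, x: t * x             # a map T x X -> Y
g = curry(f)                       # the corresponding map X -> Y^T
print(f(3, 4), uncurry(g)(3, 4))   # 12 12: the two directions are inverse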
H: Grothendieck group of a symmetric monoidal category is a lambda ring?
I understand that taking the Grothendieck group of a braided monoidal (abelian) category gives us a commutative ring and that taking that of a symmetric monoidal (abelian) category gives us a $\lambda$-ring. Now, I have simply seen this (latter) fact stated on the internet by reputable mathematicians, but I have not been able to find a reference explaining how one gets all that extra structure ($\lambda$-rings seem quite complicated!) from one extra property that symmetric monoidal categories have.
I was wondering if anybody could point me in the right direction or shed some light on the details?
Thanks!
AI: The $\lambda$-structure is given by taking exterior powers. This is the main motivation I know for defining $\lambda$-rings in the first place. (You need an action of $S_n$ on an $n^{th}$ tensor power $V^{\otimes n}$ to define the exterior power, which is what being symmetric monoidal gets you; in the braided monoidal case you only get an action of $B_n$.) |
H: show union of two intervals is not connected
Let $X$ be a topological space. $X$ is connected if it cannot be written as $X= U\cup V$ with nonempty open sets $U,V$ satisfying $U\cap V=\emptyset$.
If you consider $A:=(0,1]\cup(2,3)\subset\mathbb R$, $A$ is not connected.
But how can you prove it? Clearly I have to find those open sets like above but how? Thanks!
AI: Just take $U=(0,1]$ and $V=(2,3)$: those sets are both open in $A$. To see that $U$ is open in $A$, observe that it’s equal to $A\cap(0,2)$, where $(0,2)$ is open in $\Bbb R$. |
H: Very symmetric convex polytope
Let $C_n$ be the convex polytope in ${\mathbb R}^n$ defined by the inequalities
(in $n$ variables $x_1,x_2, \ldots ,x_n$) :
$$
x_i \geq 0, x_i+x_j \leq 1
$$
(for any indices $i<j$).
Denote by $E_n$ the set of extremal points of $C_n$. We have a natural action
of the symmetric group $G={\mathfrak S}_n$ on $C_n$, and hence on $E_n$ also. So we have
a quotient set $E_n/G$. Here are some questions about it, in decreasing order of difficulty:
1) Is a simple description of $E_n/G$ known in general?
2) What is the asymptotic behaviour of the sequence $(|E_n/G|)_{n \geq 2}$?
3) Is the sequence $(|E_n/G|)_{n \geq 2}$ bounded?
AI: Perhaps I'm overlooking something, but it seems to me that the extreme points of $C_n$ are of three sorts. (1) The zero vector. (2) The standard unit vectors, with a $1$ in a single component and zeros in the other $n-1$ components. (3) Vectors with the entry $1/2$ in some set of three or more components and zeros in the remaining components. If that's right, then there is a single $G$-orbit for (1), another for (2), and $n-2$ orbits for (3). Each orbit for (3) is characterized by the cardinality, between $3$ and $n$ inclusive, of the set of coordinates where $1/2$ occurs. |
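For small $n$ this classification is easy to confirm by brute force: a point of $C_n$ is extreme iff some $n$ linearly independent defining inequalities are active there and the point is feasible. A sketch in Python (tolerances and the rounding are my choices):
import itertools
import numpy as np

def vertices(n, tol=1e-9):
    rows, rhs = [], []
    for i in range(n):                              # -x_i <= 0
        r = np.zeros(n); r[i] = -1.0
        rows.append(r); rhs.append(0.0)
    for i, j in itertools.combinations(range(n), 2):
        r = np.zeros(n); r[i] = r[j] = 1.0          # x_i + x_j <= 1
        rows.append(r); rhs.append(1.0)
    A, b = np.array(rows), np.array(rhs)
    verts = set()
    for idx in itertools.combinations(range(len(b)), n):
        M, c = A[list(idx)], b[list(idx)]
        if abs(np.linalg.det(M)) < tol:
            continue
        v = np.linalg.solve(M, c)
        if np.all(A @ v <= b + tol):                # feasible => vertex
            verts.add(tuple(np.round(v, 6)))
    return verts

for n in (3, 4):
    kinds = {}
    for v in vertices(n):
        key = (sum(t == 1.0 for t in v), sum(t == 0.5 for t in v))
        kinds[key] = kinds.get(key, 0) + 1
    print(n, kinds)   # only the zero vector, unit vectors, and 1/2-vectors
                      # with support of size >= 3 appear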
H: Projections onto closed and convex sets
I have to prove that if $A$ is a closed and convex set, then for $z\in A$ we have $z=P_A(x)$ if and only if $\langle x-z, z-y\rangle \geq 0$ for all $y\in A$.
I have the following proof, which is not very complicated, but I don't understand a few things.
Let $g(\theta)=||x-((1-\theta)z+\theta y)||^2$, $\theta \in \mathbb R$, with $z=P_A(x)$ and $y\in A$. This is a quadratic function of the variable $\theta$, and it has its minimum at $\theta =-\frac{\langle x-z,z-y\rangle}{||z-y||^2}$
Now there is a part that I don't understand:
For $z=P_A(x)$, from convexity of a set A, we get $g(0)\leq g(\theta)$ for all $\theta \in [0,1]$, so $\theta_{min} \leq 0$.
I know why $g(0)\leq g(\theta)$ (I can see it by simply putting $0$ into the function), but I don't know how the convexity of $A$ caused that, and why we took $\theta$ from $[0,1]$.
The rest of the proof is ok.
Would anybody try to make this clear to me?
AI: If $z$ is the projection of $x$ to $A$, then it's the closest point to $x$ in $A$. If $y$ is any other point in $A$ and $0\leq\theta\leq 1$, then $(1-\theta)z+\theta y$ is in $A$ because $A$ is convex. (Notice that this would not follow if $\theta$ were outside $[0,1]$.) So the distance from $x$ to this point $(1-\theta)z+\theta y$ in $A$, must be at least the distance from $x$ to $z$. Square both sides of that inequality (because squared distances are algebraically nicer than distances in Euclidean space), and you get $g(\theta)\geq g(0)$. So $g(\theta)$, with $\theta$ restricted to $[0,1]$, takes its minimum value at $\theta=0$. |
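A small numerical illustration of the inequality (a sketch, assuming $A=[0,1]^5$, a closed convex box, for which the projection is coordinatewise clipping):
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = 3 * rng.normal(size=n)          # an arbitrary point, often outside A
z = np.clip(x, 0.0, 1.0)            # z = P_A(x) for the box A = [0,1]^n
worst = min(np.dot(x - z, z - y) for y in rng.uniform(size=(10000, n)))
print(worst >= -1e-12)              # True: <x-z, z-y> >= 0 for sampled y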
H: Vector analysis. Del and dot products
I am trying to prove that
$$\nabla(\mathbf{A} \cdot \mathbf{B}) = (\mathbf{A} \cdot \nabla)\mathbf{B} + (\mathbf{B} \cdot \nabla)\mathbf{A} + \mathbf{A} \times (\nabla \times \mathbf{B}) + \mathbf{B} \times (\nabla \times \mathbf{A})$$
I've gotten as far as $\nabla(\mathbf{A} \cdot \mathbf{B}) = \nabla A\cdot B+\nabla B\cdot A$, using subscript summation. I don't know how to proceed.
This is part of proving that
$$\frac{Dv}{Dt}=\frac{\partial v}{\partial t}+\nabla(\frac{v^2}{2})-v\times(\nabla\times v)$$
AI: Let's start with the right hand side, we have, considering the $i$th component
\begin{align*}
&\!\!\! [(A \cdot \nabla) B + (B \cdot \nabla)A + A \times (\nabla \times B) + B \times (\nabla \times A)]_i\\
&= \delta^{jk}A_j\partial_kB_i + \delta^{jk}B_j\partial_kA_i + \epsilon^{jk}_{\;\;i}A_j\epsilon^{\mu\nu}_{\;\;k}\partial_\mu B_\nu + \epsilon^{jk}_{\;\;i}B_j\epsilon^{\mu\nu}_{\;\;k}\partial_\mu A_\nu\\
&= \delta^{jk}A_j\partial_kB_i + \delta^{jk}B_j\partial_kA_i + (\delta^\mu_i\delta^{\nu j} - \delta^\nu_i \delta^{\mu j})(A_j\partial_\mu B_\nu + B_j\partial_\mu A_\nu)\\
&= \delta^{jk}A_j\partial_kB_i + \delta^{jk}B_j\partial_kA_i + \delta^{jk}(A_j\partial_i B_k + B_j\partial_i A_k) - \delta^{jk}(A_j\partial_k B_i + B_j\partial_k A_i)\\
&= \delta^{jk}(A_j\partial_i B_k + B_j\partial_i A_k)\\
&= \partial_i(\delta^{jk} A_j B_k)\\
&= [\nabla(A \cdot B)]_i
\end{align*} |
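One can also let a computer algebra system confirm the identity for concrete fields; here is a sketch with SymPy (the particular polynomial fields $A$ and $B$ are my choices):
import sympy as sp

x, y, z = sp.symbols('x y z')
V = (x, y, z)
A = sp.Matrix([x*y, y*z, z*x])
B = sp.Matrix([x**2, y + z, x*z])

grad = lambda s: sp.Matrix([sp.diff(s, v) for v in V])
curl = lambda F: sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                            sp.diff(F[0], z) - sp.diff(F[2], x),
                            sp.diff(F[1], x) - sp.diff(F[0], y)])
adv = lambda F, G: sp.Matrix([sum(F[j] * sp.diff(G[i], V[j])
                                  for j in range(3))
                              for i in range(3)])   # (F . grad) G

lhs = grad(A.dot(B))
rhs = adv(A, B) + adv(B, A) + A.cross(curl(B)) + B.cross(curl(A))
print(sp.simplify(lhs - rhs))   # the zero vector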
H: Connectedness of the Given Set
How can I find out whether $A=\{(x,y) \in\Bbb C^2:x^2+y^2=1\}$ is connected in $\Bbb C^2$?
Thanks for any help.
AI: Yes, we have the usual rational parametrization of the conic $C=\{-z_0^2+z_1^2+z_2^2=0\}\subset\mathbb CP^2$:
$$f\colon \mathbb CP^1\to\mathbb CP^2, \quad f([t_0,t_1]) = [t_0^2+t_1^2,2t_0t_1,t_0^2-t_1^2]\,.$$
The points $[1,\pm i]$ map to the two points at infinity, and so $A\subset C$ is parametrized by $\mathbb CP^1 - \{\text{two points}\} \cong \mathbb C-\{0\}$, which is connected.
In affine coordinates, we're mapping $t\in\mathbb C-\{\pm i\}$ to $\left(\frac{2t}{1+t^2},\frac{1-t^2}{1+t^2}\right)$, and the point $\infty\in\mathbb C\cup\{\infty\}$ maps to $(0,-1)$. This map is a homeomorphism. |
H: Barbalat's lemma for Stability Analysis
Good day,
We have:
Lyapunov-Like Lemma: If a scalar function V(t, x) satisfies the
following conditions:
$V(t,x)$ is lower bounded
$\dot{V}(t,x)$ is negative semi-definite
$\dot{V}(t,x)$ is uniformly continuous in time
then $\dot{V}(t,x) \to 0$ as $t \to \infty $.
Now if we have the following system:
$\dot{e} = -e + \theta w(t) \\
\dot{\theta} = -e w(t)$
and assume that $w(t)$ is a bounded function, then we can select the following Lyapunov function:
$V(x,t) = e^2 + \theta^2$
Taking the time derivative:
$\dot{V}(x,t) = -2e^2 \leq 0$
Taking the time derivative again:
$\ddot{V}(x,t) = -4e(-e+\theta w)$
Now $\ddot{V}(x,t)$ satisfies condition (3) when $e$ and $\theta$ are bounded, but how can I be sure that these two variables are indeed bounded? Should I perform some other test, or...?
AI: Notice that you can decouple your system by writing the equations governing the "polar coordinates" $(e,\theta)=(R\cos\phi,R\sin\phi)$. In particular, (if my calculations are correct, and please comment if you find an error): \begin{align}\dot{\phi}= &\cos\phi \sin\phi -w ,\\\dot{R} =& -R \cos^2\phi . \end{align}
Since $V=R^2$, you are trying to show that $\dot{V}=2R\dot{R}\to 0$ as $t\to \infty$. For an initial value problem for $(e,\theta)$ with $(e(0),\theta(0))\neq (0,0)$, we can choose a corresponding initial value problem for $(R,\phi)$ with $R(0)>0$. In such a case, on the interval of existence of a solution $R$ will be a monotone decreasing positive function (although potentially not strictly decreasing!). If either $\dot{R}\to 0$ or $R\to 0$ as $t\to \infty$ then $R\dot{R} \to 0$, since in the first case we know $|R|$ is bounded (as any positive decreasing function must be), and in the second case we have the formula $|R\dot{R}|=|-R^2\cos^2\phi|\leq R^2$.
Finally, notice that for an equation like $\dot{y} = a(t) y$ with $a(t)\leq 0$ and $a$ continuous, either $a(t)\to 0$ as $t\to T$ for some $T\in [0,\infty]$ (note I include $\infty$ here) or else $a(t)<-m<0$ for all $t$, and by Gronwall's inequality $y(t)<y(0) e^{-mt}$. In particular, in this situation (assuming a positive initial value) either $\dot{y}\to 0$ as $t\to T$ for some $T\in[0,\infty]$ or $y\to 0$ as $t\to \infty$. Since $\phi$ is meant to be a solution of some (generally speaking) integral equation, it, and hence $-\cos^2\phi$, is continuous, and we have that $\dot{V}\to 0$ as $t\to \infty$, or as $t\to T$ if there exists a value $T$ such that $\cos^2\phi(T)=0$.
There are some cautionary statements, however. It's generally possible (although I'm not sure about this particular system) that $\dot{R}=0$ in finite time (like in our case, if $w=0$ and $\phi=\pi/2$). Solving explicitly for $R$ you have $R(t)=R_0 \exp(-\int_{0}^{t} \cos^2\phi(\tau)d\tau)$, so it's obvious that $R$ won't reach zero in finite time ($\cos^2 \phi(\tau)$ being a bounded function, after all). Also, I haven't shown anything about existence, but I have assumed existence throughout.
EDIT: Let me sheepishly add that it's clear that $(e,\theta)$ are bounded since $V=e^2+\theta^2$ is decreasing in time. (So in particular, $e(t),\theta(t)\leq \sqrt{V(t)}\leq \sqrt{V(0)}$ for all $t\geq 0$). |
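If it helps to see the conclusion numerically, here is a simulation sketch with SciPy (the bounded input $w(t)=\sin t$ and the initial condition are my choices):
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    e, theta = s
    w = np.sin(t)                     # a bounded input
    return [-e + theta * w, -e * w]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 1.0], dense_output=True, rtol=1e-8)
for t in (0.0, 10.0, 25.0, 50.0):
    e, theta = sol.sol(t)
    print(t, e * e + theta * theta, -2 * e * e)   # V decreases, Vdot -> 0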
H: Principal ideals and UFD's
Problems 1-6 form a project designed to prove that if $R$ is a UFD and every nonzero prime ideal of $R$ is maximal, then $R$ is a PID.
Let $I$ be an ideal of $R$. Since $\{0\}$ is principal, we can assume that $I \not= \{0\}$. Since $R$ is a UFD, every nonzero element of $I$ can be written as $up_1...p_t$ where $u$ is a unit and the $p_i$ are irreducible, hence prime. Let $r=r(I)$ be the minimum such $t$. We are going to prove by induction on $r$ that $I$ is principal.
1) If r=0, show that $I = \langle 1 \rangle = R$.
Answer
If r=0 then I contains a unit, so that $1 \in I$ and I=R.
I'm not sure if I get this, because I'm having trouble understanding the notation $r=r(I)$. What is that? Does it mean that we multiply $r$ by $I$? If so, then $r=0$ just gives us $0=0$. So I'm sure that I'm misunderstanding that notation...
Thank you in advance
AI: Here $\,r\,$ denotes the length of the shortest element in $\,I,\,$ i.e. the element having the least number of prime factors. If $\,r = 0\,$ then a nonzero element of $\,I\,$ has no prime factors, so it is a unit.
The proof employs the following pretty generalization of the Euclidean algorithm to arbitrary PIDs. The Dedekind-Hasse criterion states that a domain $\rm\:D\:$ is a PID iff given any $\rm\:0\ne b,c \in D,\:$ either $\rm\:b\:|\:c\:$ or there exists a $\rm D$-linear combination of $\rm\:b,c\:$ that's "smaller" than $\rm b,\:$ where size is measured by naturals (or any ordinal), so that induction (or descent) works.
It is clear that such a domain must be a PID, since the smallest element in an ideal must divide all others. Conversely, since a PID is a UFD, an adequate metric is the number of prime factors (since if $\rm\:b\nmid c\:$ then their gcd $\rm\:d\:$ must have fewer prime factors; for if $\rm\:(b,c) = (d)\:$ then $\rm\:d\:|\:b\:$ properly, else $\rm\:b\:|\:d\:|\:c\:$ contra hypothesis). Notice Euclidean descent by the Division Algorithm is just a special case, hence Euclidean $\Rightarrow$ PID ($\Rightarrow$ {UFD, Bezout} $\Rightarrow$ GCD). |
H: How does standard random variable have variance of 1?
Let X be a discrete random variable and define $Z = \cfrac{X - \mu_x}{\sigma_x}=\cfrac{1}{\sigma_x} \cdot X - \cfrac{\mu_x}{\sigma_x}$ which is a linear transformation of $X$.
How do you get a variance of 1 assuming this? I tried working it out but couldn't get it. I am stuck on the last step:
$\mu_z=E[Z] = \cfrac{\mu_x}{\sigma_x} - \cfrac{\mu_x}{\sigma_x} =0$
$Z^2 = \left(\cfrac{X-\mu_x}{\sigma_x}\right)^2= \cfrac{X^2}{\sigma_x^2} - \cfrac{2 X \mu_x}{\sigma_x^2} + \cfrac{\mu_x^2}{\sigma_x^2} $
$E[Z^2] = \cfrac{1}{\sigma_x^2}(E[X^2] - 2 E[X] \mu_x + \mu_x^2)$
$E[Z^2] = \cfrac{1}{\sigma_x^2}(E[X^2] -E[X]^2)$
By defn:
$\sigma_z^2 = E[Z^2] - E[Z]^2=E[Z^2]=\cfrac{1}{\sigma_x^2}(E[X^2] -E[X]^2)$
I see that $E[X]^2 = \mu_x^2$, but how do you simplify $E[X^2]$? Thank you in advance.
AI: As $\sigma_x^2 = E[X^2] - E[X]^2$ by definition of $\sigma_x$, you have in your computation of $E[Z^2]$, that
$$ E[Z^2] = \frac 1{\sigma_x^2} \bigl(E[X^2] - E[X]^2\bigr) = 1 $$
So $\sigma_z^2 = E[Z^2] - E[Z]^2 = 1 - 0^2 = 1$. |
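An empirical sanity check with NumPy (a sketch; the exponential distribution is an arbitrary choice, since the argument works for any $X$ with finite variance):
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=1_000_000)
z = (x - x.mean()) / x.std()      # the standardized variable Z
print(z.mean(), z.var())          # approximately 0 and 1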
H: Maximal subrings of $\mathbb{Q}$
Consider the sets $$\mathbb{Z}_{(p)}= \left\{ \frac{a}{b} \in \mathbb{Q}\mathbin{\Large\mid} b \notin (p) \right\} $$ Are these all the maximal subrings of the rationals?
AI: Yes. It is not too difficult to show that every subring of $\bf Q$ is a localization of $\bf Z$ with respect to a set of primes (with no restrictions on the sets of primes).
Let $R$ be a subring of $\bf Q$. Let $S$ be the set of all primes $p$ such that $p^{-1}\in R$. Then $S^{-1}{\bf Z}\subseteq R$. Now pick an element $x/y\in R\setminus S^{-1}{\bf Z}$ with $x,y\in\bf Z$ (supposing for the moment that such an element exists). There must be some prime $q$ such that $q\mid y$, $q\nmid x$ and $q\not\in S$, and so (by multiplying out) we conclude $x/q\in R$. Pick $a,b\in\bf Z$ such that $ax+bq=1$. Then $q^{-1}=a(x/q)+b\in R$, hence we deduce $q\in S$, a contradiction. Hence $R=S^{-1}{\bf Z}$.
So subrings are in bijective inclusion-preserving correspondence with subsets of the set of rational primes, so $S^{-1}{\bf Z}$ is maximal in $\bf Q$ iff $S$ is maximal in $\cal P$ iff $S={\cal P}\setminus\{p\}$ for some $p\in\cal P$. |
H: How many ways to divide a population of n members into groups of i members.
Let's say I have a population of 180, to be divided in disjoint groups of 6. In how many different ways can I divide this population? A general formula would help! Thanks.
AI: There are $180!$ ways to line the people up, and we can then group them from positions $1-6$, $7-12$, and so on.
However, we are overcounting in two ways. First, within each group of $6$ we are overcounting by a factor of $6!$. Second, the $30$ groups themselves can be arranged in $30!$ different ways. We thus have the following answer:
$$\frac{180!}{(6!)^{30}30!}$$
This is a large number by the way, on the order of $10^{211}$. In general, with $n$ people and $k$ people per group, where $n=km$, the formula is:
$$\frac{n!}{(k!)^mm!}$$ |
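The number is easy to evaluate exactly with integer arithmetic; a quick sketch:
from math import factorial

def groupings(n, k):
    m = n // k                          # number of groups
    assert n == k * m
    return factorial(n) // (factorial(k) ** m * factorial(m))

print(groupings(4, 2))                  # 3: {12|34}, {13|24}, {14|23}
print(len(str(groupings(180, 6))))      # 212 digits, so about 10^211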
H: Vector Picking on the Unit Sphere
Imagine a vector from the center of a unit sphere to its surface:
Now imagine a second vector generated in identical fashion. Given the first vector, how can I generate second vectors so that the angle between them (θ) is uniformly distributed?
My first thought was to use spherical coordinates -- however this generates a non-uniform distribution (as most points picked will be near the equatorial circumference, relative to the first vector):
Next I read this Wolfram MathWorld article on sphere point picking. But that yields nearly identical results... the {X,Y,Z} endpoint is now uniformly distributed, but the angle (θ) between the two unit vectors is not.
The closest I've come is to pick the end point of the second vector on a unit circle which I place in a plane through the sphere's center point. Then I take the point and rotate it about the original vector by a random amount, using the equation for rotation about a line in arbitrary space (such that the unit sphere's center can be placed anywhere in space).
This gives this distribution:
Which is relatively flat on [30,150], but spikes near the peaks.
Any ideas on how to pick the second vector so as to give a uniform angular distribution?
AI: In spherical coordinates, the area of a spherical cap is proportional to one minus the cosine of the polar angle, as measured from a chosen "pole" on the sphere. So perhaps after choosing your first vector, the second vector would be chosen so that $ \ -1 \ \le \cos \theta \le +1 \ $ is uniformly distributed for your second vector, $ \ \theta \ $ being the plane angle between them, and the "azimuthal angle" $ \ 0 \le \phi < 2 \pi\ $ for the direction of the second vector being uniformly distributed. |
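To see the two notions of "uniform" concretely, here is a small sampling sketch with NumPy (the bin edges are my choice): the first scheme (cosine of the angle uniform) gives points uniform in area on the sphere; the second samples the angle $\theta$ itself uniformly.
import numpy as np

rng = np.random.default_rng(0)
N = 100000
theta_area = np.arccos(rng.uniform(-1.0, 1.0, N))   # uniform on the sphere
theta_flat = rng.uniform(0.0, np.pi, N)             # uniform in the angle

bins = np.linspace(0, 180, 7)                       # 30-degree bins
print(np.histogram(np.degrees(theta_area), bins)[0])  # piles up near 90
print(np.histogram(np.degrees(theta_flat), bins)[0])  # flat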
H: derivative of fourier transform
Let $f\in C^k$ and $f^{(k)}$ be absolutely integrable. I want to show for the Fourier transform:
$$\widehat{f^{(k)}}(z)=(iz)^k\widehat{f}(z)$$
I want to prove it for $k=1$ and did the following:
integration by parts: $$\begin{align*}
\int_a^bf'(w)\exp(-iwz)dw&=f(b)\exp(-ibz)-f(a)\exp(-iaz)+iz\int_a^b f(w)\exp(-iwz)dw
\end{align*}$$
The last term, for $a\to-\infty$ and $b\rightarrow\infty$, is $iz$ times the Fourier transform of $f$.
But why is $\lim_{b\rightarrow\infty}f(b)=0$, and the same for $a\to-\infty$?
AI: The reason is that:
If $|f|$ and $|f'|$ are both integrable, then $f$ must vanish at infinity.
Notice your question presupposes that the Fourier transforms of $f$ and $f'$ exist, hence they are both integrable. Now
$$
\int^{\infty}_0 f'(w) \,dw = \lim_{b\to \infty}f(b) - f(0).
$$
By integrability of $f'$, the limit exists. Moreover by integrability of $f$, the only possible limit is $0$. The same argument applies for $-\infty$. |
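A quick grid check of the identity for a function where everything is explicit (a sketch; $f(w)=e^{-w^2}$, the truncation to $[-10,10]$, and the test values of $z$ are my choices):
import numpy as np

w = np.linspace(-10, 10, 20001)
dw = w[1] - w[0]
f = np.exp(-w ** 2)                 # decays fast: truncation is harmless
fp = -2 * w * np.exp(-w ** 2)       # its derivative
for z in (0.5, 1.0, 2.0):
    lhs = np.sum(fp * np.exp(-1j * w * z)) * dw
    rhs = 1j * z * np.sum(f * np.exp(-1j * w * z)) * dw
    print(abs(lhs - rhs))           # ~ 0 in each case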
H: Find $\lim_{x\to-\infty}{x+e^{-x}}$
I have this exercise in my worksheet:
$$\lim_{x\to-\infty}{x+e^{-x}}$$
I am always ending up with $-∞+∞$ or $\frac{∞}{∞}$. It says the answer is $+∞$, but how can I get that?
AI: Negative numbers make me nervous, so let $t=-x$. We want
$$\lim_{t\to \infty} (e^t-t).$$
The answer is obvious, $e^t$ is much larger than $t$ if $t$ is large. If you want to be formal, after a (short) while $t\lt \frac{e^t}{2}$, so after a short while $e^t-t\gt \frac{1}{2}e^t$. |
H: Finite group acting freely on Hausdorff space - Topology problem
How to prove the following problem:
We are given a Hausdorff space $X$ and a finite group $G$ (with identity $e$) acting freely on $X$. For $g\in G$, $\overline{g}:X\rightarrow X$ denotes the corresponding homeomorphism.
a) Prove that for every $x\in X$ there exists an open neighborhood $U$ such that for every $g\in G$, $g\neq e,$ the property $U\cap \overline{g}(U)=\emptyset$ holds.
b) Prove that for each neighbourhood $U$ from part a) we have $\overline{g_1}(U)\cap\overline{g_2}(U)=\emptyset$ for every two distinct elements $g_1, g_2\in G$.
c) Prove that $\pi : X\to X/G$ is covering map.
Any kind of help is welcome.
AI: Let $x \in X$, $g\in G - \{e\}$. As $\bar gx \ne x$, by Hausdorffness there are open neighbourhoods $U_g$ of $x$, $V_g$ of $\bar gx$ such that $U_g \cap V_g = \emptyset$. Let $U = \bigcap_{h \in G-\{e\}} (U_h \cap \bar h^{-1}V_h)$. Then $U$ is an open neighbourhood of $x$ and for $g \in G-\{e\}$ we have
$$ \bar g U \cap U \subseteq \bar g\bar g^{-1}V_g \cap U_g = \emptyset. $$
For b) note that
$$ \bar g_1 U \cap \bar g_2 U = \bar g_1 (U \cap \overline{g_1^{-1}g_2}U) = \bar g_1\emptyset = \emptyset. $$
For c), let $\pi(x) \in X/G$, and let $U$ be as in b). Then $\pi(U)$ is open in $X/G$, as $\pi^{-1}\pi U = \bigcup_{g\in G} \bar g U$ is open; hence it is an open neighbourhood of $\pi(x)$. Moreover, restricted to each $\bar g U$, $\pi$ is a homeomorphism onto $\pi(U)$; therefore, as the $\bar g U$ are disjoint, $\pi$ is a covering map. |
H: In how many ways can you split a string of length n such that every substring has length at least m?
Suppose you have a string of length 7 (abcdefg) and you want to split this string in substrings of length at least 2.
The full enumation of the possibilities is the following:
ab/cd/efg
ab/cde/fg
abc/de/fg
abc/defg
abcd/efg
abcde/fg
abcdefg
And perhaps some others.
In general, if we have a string of length n and we want to split the string in substrings of length at least m, in how many ways can we achieve this?
AI: Let $f(n,m)$ denote this number. Clearly, $f(n,m)=0$ if $n<m$ with the exception that $f(0,m)=1$ for all $m$.
For $n\ge m$ we have
$$ f(n,m)=\sum_{k=m}^nf(n-k,m)$$
It may be simpler to count by the number of substrings: to split into $r$ substrings is the same as to split $n-rm$ into $r$ nonnegative summands, which is possible in exactly ${n-rm+r-1\choose r-1}$ ways ("stars and bars"). Thus we find
$$ f(n,m)=\sum_{r=1}^{\infty}{n-r(m-1)-1\choose r-1}\qquad\text{if }n\ge m>0.$$ |
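Both counts are easy to implement, and they agree with the hand enumeration $f(7,2)=8$ above (a sketch):
from math import comb

def f_rec(n, m):
    # dynamic programming on the recurrence above
    table = [0] * (n + 1)
    table[0] = 1
    for i in range(1, n + 1):
        table[i] = sum(table[i - k] for k in range(m, i + 1))
    return table[n]

def f_formula(n, m):
    return sum(comb(n - r * (m - 1) - 1, r - 1)
               for r in range(1, n // m + 1))

print(f_rec(7, 2), f_formula(7, 2))       # 8 8
print(all(f_rec(n, m) == f_formula(n, m)
          for n in range(1, 40) for m in range(1, 6)))   # True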
H: CDFs of generalize beta distribution pdf and standard beta pdf.
Let $f(x)$ be the probability density function (pdf) of the standard beta distribution on $(0,1)$. And let $f_d(x)$ be the pdf of the generalized beta distribution on $(0,d)$. I know that,
$$f_d(x) = d \cdot f(\frac{x}{d})$$
The cumulative distribution function (cdf) of the generalized beta distribution is,
$$F_d(x) = \int^x_{u=0} f_d(u) \, du.$$
I am tempted to replace $f_d(u)$ by $\left( d \cdot f(\frac{u}{d}) \right)$ in the integral to get,
$$F_d(x) = \int^x_{u=0} df(\frac{u}{d}) \, du = d \cdot \int^x_{u=0} f(\frac{u}{d}) \, du = d \cdot F(\frac{x}{d}).$$
Where $F$ is the cdf of the standard beta distribution. This is wrong because taking $x = d$ gives me $F_d(d) = d$ since $F(1) = 1$.
Question: Where have I gone wrong? Why can't I replace $f_d(u)$ by $\left( d \cdot f(\frac{u}{d}) \right)$ in the integral?
AI: OP wrote:
Let $f(x)$ be the probability density function (pdf) of the standard beta distribution on $(0,1)$. And let $f_d(x)$ be the pdf of the generalized beta distribution on $(0,d)$. I know that,
$$f_d(x) = d \cdot f(\frac{x}{d})$$
Scaled Beta
If $X \sim \mathrm{Beta}(a,b)$ (a standard Beta on $(0,1)$), then the pdf of the scaled Beta $Y = u(X) = cX$ will be:
$$f_c(y) = \frac{1}{c} \cdot f\left(\frac{y}{c}\right) \quad \text{not} \quad c \cdot f\left(\frac{y}{c}\right)$$
This is because, by the method of transformations, the pdf of $Y=u(X)$ is:
$$ |J| f( u^{-1}(y)) $$
where $x=u^{-1}(y)$ is the inverse of the transformation equation $Y = c X$, and $J=\frac{d u^{-1}(y)}{dy}$ denotes the Jacobian of the transformation. You have calculated the Jacobian of the transformation incorrectly.
P.S. What is described here is more correctly termed a scaled Beta, not a generalised Beta (which has generalised upper and lower bounds).
Addendum
In response to the comments below:
Let $X$ ~ standard Beta with pdf $f(x)$:
f = (x^(a-1) (1-x)^(b-1))/Beta[a, b];
domain[f] = {x, 0, 1} && {a > 0, b > 0};
... then, using Mathematica here, you wish to confirm:
Integrate[(1/c) f /. x -> y/c, {y, 0, c}, Assumptions -> {a > 0, b > 0, c > 0}]
1 |
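To make the correction concrete, a quick numerical check (a sketch, assuming $a=2$, $b=3$, $c=5$):
import numpy as np
from scipy.stats import beta

a, b, c = 2.0, 3.0, 5.0
y = np.linspace(0.0, c, 100001)
dy = y[1] - y[0]
good = (1.0 / c) * beta.pdf(y / c, a, b)    # integrates to 1: a density
bad = c * beta.pdf(y / c, a, b)             # integrates to c^2 = 25
print(np.sum(good) * dy, np.sum(bad) * dy)  # ~1.0 and ~25.0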
H: A problem from "Thinking in Problems" by Roytvarf, Birkhauser
I got a problem, which turned out to be from the book "Thinking in Problems: How Mathematicians Find Creative Solutions" by Roytvarf, Chapter One, Jacobi Identities and Related Combinatorial Formulas:
The problem is asking to prove that
$$
\sum_{i=1}^{n}
\frac{x_i^m}{\prod_{j\neq i}(x_i-x_j)}=0 \ \text{ for }\ m < n-1.
$$
The author gave a hint (I will copy the original words because I have difficulties in following it): "Analytic method: using implicit function theorem: The left-hand sides are proportional to the first derivatives of the functions $\sum_{i=1}^n x_i^{m+1}$, with respect to the constant term of the polynomial $F(z):={\prod_i (z-x_{i})}$, while the other coefficients are kept fixed"
What does it mean ? The constant term of $\prod_i (z-x_i)$ is $\prod_i x_i$, so I try to take the partial derivative as
$$
\frac{\partial \sum_i x_i^{m+1}}{\partial \prod_i x_i}
=\sum_j \left( \frac{\partial \sum_i x_i^{m+1}}{\partial x_j}
\frac{\partial x_j}{\partial \prod_i x_i} \right)
=\sum_j \left( \frac{(m+1)x_j^m}{(\prod_i x_i)/x_j} \right)
=\sum_j \left( \frac{(m+1)x_j^{m+1}}{(\prod_i x_i)} \right).
$$
However, it doesn't seem to contain the factors on the left-hand side of the equation. Also, I am confused about the meaning of "while the other coefficients are kept fixed".
I think, most of all, I just don't see how the implicit function theorem is related to the hint here.
Can someone help me to understand? Thanks a lot!
AI: You consider the map $c\colon (x_1,\ldots,x_n)\mapsto (a_0,\ldots,a_{n-1})$ where $x^n+a_{n-1}x^{n-1}+\ldots+a_0=(x-x_1)\cdots(x-x_n)$.
For any point $\mathbf x=(x_1,\ldots,x_n)$ with all entries different(!), you can find an inverse map $c^{-1}$ from a neighbourhood of $c(\mathbf x)$ to a neighbourhood of $\mathbf x$. Of course you can combine this with $s_m\colon(x_1,\ldots,x_n)\mapsto \sum_i x_i^{m+1}$ and then consider $\frac \partial {\partial a_0}(s_m\circ c^{-1})$ |
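Independently of the hint, the identity itself is easy to check numerically, which can be reassuring while working through the proof (a sketch with random distinct $x_i$; the case $m=n-1$ is included to show where the pattern stops):
import numpy as np

rng = np.random.default_rng(2)
n = 6
x = rng.normal(size=n)            # distinct with probability 1
for m in range(n):
    s = sum(x[i] ** m / np.prod([x[i] - x[j] for j in range(n) if j != i])
            for i in range(n))
    print(m, s)                   # ~0 for m = 0..4; ~1 for m = 5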
H: Fourier transform of periodic signal
I have a question that is similar to this one but slightly different.
If I have discrete signal $$s(t) = \sum_k n_k \delta(t-kT_0),\quad k=0,1,\dotsc,$$ where $n_k$ are just some scalar numbers. What is the Fourier transform of $s(t)$? I think it should be some kind of a convolution $$S(f) = G(f)\star\sum_m\delta(f-m/T_0),\quad m=0,1,\dotsc,$$ but what is $G(f)$? Is there an analytical expression for it in terms of $n_k$?
AI: The Fourier transform of $\delta(t)$ is $1$, so the FT of $\delta(t-a)$ is $e^{-ia\omega}$. By linearity,
$$S(\omega) = \sum_k n_k e^{-ikT_0\omega}.$$
Unless $n_k$ have a simple dependence on $k$, there is probably no closed formula for the sum.
Note: I'm assuming $\hat f(\omega) = \int_{-\infty}^\infty e^{-i\omega t}f(t)\,dt$. Other scalings of the Fourier transform are also common. |
H: Does Simpson's rule still apply when a < 0?
I am currently working on an assignment where I have to find the answer to the following integral using Simpson's rule: $\int_{-1}^{3}(x+1)\,dx$. I chose to have 6 intervals. I now start calculating:
$\Delta x = \frac{b-a}{n} = \frac{3-(-1)}{6} = \frac{4}{6} = 0.6666666\ldots$
$$\frac{4/6}{3}\left(=\frac{2}{9}\right)\cdot\left(f(0\cdot\tfrac{4}{6}) + 4 f(1\cdot\tfrac{4}{6})+ 2 f(2\cdot\tfrac{4}{6})+ 4 f(3\cdot\tfrac{4}{6})+ 2 f(4\cdot\tfrac{4}{6})+ 4 f(5\cdot\tfrac{4}{6}) + f(6\cdot\tfrac{4}{6})\right) = \frac{34}{3} = 11.3333$$
(The actual calculation was done by Wolfram Alpha.) It should be 8. I wrote this into a Java function:
public class Simpsons_rule {
public void start(Formula f, int n) throws Exception{
if (n%2 != 0){
throw new Exception("n must be even.");
}
double deltaX = (f.getB() - f.getA())/n;
double endanswer = 0.0;
for (double interval = 1.0; interval < n ; ++interval){
if (interval%2 != 0){
endanswer += 4*(f.calculate(interval * deltaX));
} else {
endanswer += 2*(f.calculate(interval * deltaX));
}
}
endanswer += f.calculate(0); //or 0*deltaX, whichever you prefer
endanswer += f.calculate(n*deltaX);
endanswer *= deltaX/3;
System.out.println(endanswer);
}
}
The class that gets used:
public class Som3 implements Formula {
@Override
public void print() {
System.out.println("Som 3\n x+1 dx (MIN = -1 MAX = 3");
}
@Override
public double calculate(double d) {
return d+1;
}
@Override
public double getA() {
return -1;
}
@Override
public double getB() {
return 3;
}
@Override
public void printCorrectAnswer() {
System.out.println("The correct answer: 8 (Calculated by Wolfram Alpha)");
}
}
This function works perfect for any intergral that I push into it, except for this one. The only difference between these intergrals is that a is below 0 here and above or equal to zero everywhere else. Is it true that Simpsons rule only applies when a>=0? Or Am I doing something wrong?
Edit: My understanding of Simpson's rule was incorrect; here is the correct version of class Simpsons_rule:
public class Simpsons_rule {
public void start(Formula f, int n) throws Exception{
if (n%2 != 0){
throw new Exception("n must be even.");
}
double deltaX = (f.getB() - f.getA())/n;
double endanswer = 0.0;
for (double interval = 1.0; interval < n ; ++interval){
if (interval%2 != 0){
endanswer += 4*(f.calculate(interval * deltaX + f.getA()));
} else {
endanswer += 2*(f.calculate(interval * deltaX + f.getA()));
}
}
endanswer += f.calculate(f.getA()); //or 0*deltaX, whichever you prefer
endanswer += f.calculate(f.getB());
endanswer *= deltaX/3;
System.out.println(endanswer);
}
}
AI: I would not use a double for interval, but regarding your question: $a<0$ is allowed; all you should require is $b>a$.
However, why do you call f.calculate(0) and not f.calculate(f.getA())? Or maybe you should call f.calculate(f.getB()-f.getA()) instead of f.calculate(f.getB())? It is hard to tell without knowing what calculate does. At any rate, I suspect that your method fails for many functions and intervals. |
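For reference, here is the same algorithm in Python with the offset fix applied (a sketch; the key point is the node a + i*dx, which works for any a, negative or not):
def simpson(f, a, b, n):
    if n % 2 != 0:
        raise ValueError("n must be even")
    dx = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * dx)   # note the offset a
    return total * dx / 3

print(simpson(lambda x: x + 1, -1.0, 3.0, 6))   # 8.0, as expected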
H: Why is this true? $(\exists x)(P(x) \Rightarrow (\forall y) P(y))$
Why is this true?
$\exists x\,\big(P(x) \Rightarrow \forall y\:P(y)\big)$
AI: Since this may be homework, I do not want to provide the full formal proof, but I will share the informal justification. Classical first-order logic typically makes the assumption of existential import (i.e., that the domain of discourse is non-empty). In classical logic, the principle of excluded middle holds, i.e., that for any $\phi$, either $\phi$ or $\lnot\phi$ holds. Since I first encountered this kind of sentence where $P(x)$ was interpreted as "$x$ is a bird," I will use that in the following argument. Finally, recall that a material conditional $\phi \to \psi$ is true if and only if either $\phi$ is false or $\psi$ is true.
By excluded middle, it is either true that everything is a bird, or that not everything is a bird. Let us consider these cases:
If everything is a bird, then pick an arbitrary individual $x$, and note that the conditional “if $x$ is a bird, then everything is a bird,” is true, since the consequent is true. Therefore, if everything is a bird, then there is something such that if it is a bird, then everything is a bird.
If it is not the case that everything is a bird, then there must be some $x$ which is not a bird. Then consider the conditional “if $x$ is a bird, then everything is a bird.” It is true because its antecedent, “$x$ is a bird,” is false. Therefore, if it is not the case that everything is a bird, then there is something (a non-bird, in fact) such that if it is a bird, then everything is a bird.
Since it holds in each of the exhaustive cases that there is something such that if it is a bird, then everything is a bird, we conclude that there is, in fact, something such that if it is a bird, then everything is a bird.
Alternatives
Since questions about the domain came up in the comments, it seems worthwhile to consider the three preconditions to this argument: existential import (the domain is non-empty); excluded middle ($\phi \lor \lnot\phi$); and the material conditional ($(\phi \to \psi) \equiv (\lnot\phi \lor \psi)$). Each of these can be changed in a way that can affect the argument. This might not be the place to examine how each of these affects the argument, but we can at least give pointers to resources about the alternatives.
Existential import asserts that the universe of discourse is non-empty. Free logics relax this constraint. If the universe of discourse were empty, it would seem that $\exists x.(P(x) \to \forall y.P(y))$ should be vacuously false.
Intuitionistic logics do not presume the excluded middle, in general. The argument above started with a claim of the form “either $\phi$ or $\lnot\phi$.”
There are plenty of philosophical difficulties with the material conditional, especially as used to represent “if … then …” sentences in natural language. If we took the conditional to be a counterfactual, for instance, and so were considering the sentence “there is something such that if it were a bird (even if it is not actually a bird), then everything would be a bird,” it seems like it should no longer be provable. |
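Since the argument is a finite case split, the sentence can also be verified mechanically over all interpretations on small non-empty domains (a sketch in Python; the domain sizes are my choice):
from itertools import chain, combinations

def holds(domain, P):
    # Exists x . (P(x) -> Forall y . P(y)), with the material conditional
    all_P = all(y in P for y in domain)
    return any((x not in P) or all_P for x in domain)

for size in (1, 2, 3, 4):
    domain = range(size)
    subsets = chain.from_iterable(combinations(domain, k)
                                  for k in range(size + 1))
    print(size, all(holds(domain, set(s)) for s in subsets))   # True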
H: Factoring Quadratic Trinomials
I'm currently doing some homework, but I'm COMPLETELY stuck on one problem. I need to factor the following trinomial:
$$5x^2+7xy+2y^2$$
How can I solve this problem? I have no idea what to do because of the different variables.
AI: Given:
$\boxed{5x^2+7xy+2y^2}$
We can use a method called grouping to factor this equation.
Start by multiplying $5x^2\cdot2y^2$ which equals $10x^2y^2.$
Now look for two numbers which multiply to $10x^2y^2$ and add to $7xy$.
In this case, $2xy$ & $5xy$ are the two numbers which we want because $2xy\cdot5xy=10x^2y^2$ and $2xy+5xy=7xy$.
We can now split the $7xy$ in the original expression:
$$(5x^2+5xy)+(2xy+2y^2)$$
We will now factor out any common factors.
$$=5x\left(\frac{5x^2+5xy}{5x}\right)+2y\left(\frac{2xy+2y^2}{2y}\right)\\=5x(x+y)+2y(x+y)\\=(5x+2y)(x+y)$$
The final factorization of ${5x^2+7xy+2y^2}$:
$$(5x+2y)(x+y)$$ |
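If you want to double-check a factorization like this, a computer algebra system will confirm it (a sketch):
import sympy as sp

x, y = sp.symbols('x y')
print(sp.factor(5 * x**2 + 7 * x * y + 2 * y**2))   # (x + y)*(5*x + 2*y)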
H: How do I solve the weighted normal equations?
I am trying to solve the normal equations for a 3D LSE of a general quadric:
$$ z = ax^2 + bx + cxy + dy^2 + ey + f$$
Write as a vector equation:
$$ \vec{z}= \bf{X}\vec{\beta}$$
where the $i$th row of $X$ is:
$$ X_i = [x^2, x, xy, y^2, y, 1] $$
and
$$ \vec{\beta} =[a, b, c, d, e, f]^T$$
In addition I want to weight the points with the following function:
$$ w(x,y)=(1-(x^2+y^2)^{1.5}/r^3)^3 $$
if
$$ (x^2+y^2) < r^2 $$
0 otherwise (for some user specified distance r).
According to Wikipedia, the normal equations with weights should be:
$$ ({\bf{X}}^T \bf{W} \bf{X}) \vec{\beta}= ({\bf{X}}^T \bf{W}) \vec{z} $$
where $W$ is a diagonal matrix with the $n$ weight values $w$ along the diagonal.
How do I solve this equation in a fast and robust manner?
The wiki page, e.g., derives the SVD solution for the unweighted case, but how does this work for the weighted case?
AI: Let me begin by saying that the normal equations are an inefficient method of solving a least squares problem.
Mathematically, the problem which you are trying to solve using the normal equations for $\vec{\beta}$ with no weighting is equivalent to
$$
\vec{\beta}=\operatorname{arg min}\limits_b\;\left\|X\vec{b}-\vec{z}\right\|_2^2.
$$
Expanding this norm under the usual inner product $\langle x,y\rangle=x^Ty$ gives
$$
\left\|X\vec{b}-\vec{z}\right\|_2^2=\vec{b}^TX^TX\vec{b}-2\vec{b}^TX^T\vec{z}+\vec{z}^T\vec{z}.
$$
This is a quadratic form in $\vec{b}$, which can be solved by differentiating this expression wrt $\vec{b}$ and solving the resulting system of linear equations:
$$
X^TX\vec{b}=X^T\vec{z},
$$
which is the derivation of the normal equations.
If the inner product were a non-standard inner product, such as $\langle x,y\rangle_W=x^TWy$, then following the same derivation, you obtain the weighted normal equations:
$$
X^TWX\vec{b}=X^TW\vec{z}.
$$
My reason for giving this description is to make plain the effect of the non-standard inner product. Following back through the steps, it should become clear that this is equivalent to the following problem, using the standard inner product:
$$
\vec{\beta}=\operatorname{arg min}\limits_b\;\left\|W^{1/2}\left(X\vec{b}-\vec{z}\right)\right\|_2^2.
$$
Now, the QR decomposition can be used to efficiently solve the problem $\min\limits_x\|A\vec{x}-\vec{d}\|$, by factoring $A=QR$, where $Q$ is an orthonormal matrix ($Q^TQ=I$) and $R$ is upper triangular. The linear system becomes
$QR\vec{x}=\vec{d},$
which upon left multiplying by $Q^T$ gives
$$R\vec{x}=Q^T\vec{d},$$
since $Q^TQ=I$. The solution of this linear system $\vec{x}$ is guaranteed to be the solution which minimises $\|A\vec{x}-\vec{d}\|$.
The relationship of this to your problem is $A=W^{1/2}X$ and $\vec{d}=W^{1/2}\vec{z}$. This is a much more efficient solution method than the normal equations. It is also numerically superior to the normal equations. |
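Here is a sketch in Python of this QR approach applied to the quadric fit from the question (synthetic data; the true coefficients, the noise level, and $r$ are my choices):
import numpy as np

rng = np.random.default_rng(3)
m, r = 500, 1.0
x, y = rng.uniform(-1, 1, (2, m))

true = np.array([1.0, -2.0, 0.5, 3.0, 0.0, 1.0])        # a,b,c,d,e,f
X = np.column_stack([x**2, x, x*y, y**2, y, np.ones(m)])
z = X @ true + 0.01 * rng.normal(size=m)

d2 = x**2 + y**2                                        # squared distance
w = np.where(d2 < r**2, (1 - d2**1.5 / r**3)**3, 0.0)   # the given weights

Wh = np.sqrt(w)[:, None]                                # W^(1/2), as a column
Q, R = np.linalg.qr(Wh * X)                             # thin QR of W^(1/2) X
beta = np.linalg.solve(R, Q.T @ (Wh[:, 0] * z))
print(np.round(beta, 3))                                # close to `true`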
H: Central Limit Theorem Problem
Let $U_i, i=1,...,300$ be iid r.v's from the uniform distribution on $[-\frac{1}{2},\frac{1}{2}]$. Calculate, using the CLT, $$P(\sum_{i=1}^{300}U_i \le 3).$$
My solution:
By the CLT, for large enough $n$ the random variable $$Z=\frac{\sum_{i=1}^{n}U_i - n \mu}{\sigma \sqrt{n}}$$
is distributed $\sim N(0,1)$, where $\mu=E[U_i], \sigma=\sqrt{V[U_i]}$, for any $1\le i \le300$.
So, $$P(\sum_{i=1}^{300}U_i \le 3)=P({\sum_{i=1}^{300}U_i -300\mu \over \sqrt{300} \sigma} \le {3 - 300\mu \over \sqrt{300} \sigma})=...$$
calculating the moments $\mu=0, \sigma=\sqrt{\frac{1}{12}}$, we obtain
$$...=P(Z\le 0.6)=0.7257$$
I used this table.
AI: What you have written seems sensible. When I tried the following code in R
sampsize <- 300
cases <- 100000
set.seed(1)
matdat <- matrix(runif(sampsize*cases, min=-1/2, max=1/2), ncol=sampsize)
mean(rowSums(matdat) <= 3)
I got the result [1] 0.72506 which is close enough to your result to suggest your answer is probably correct. |
H: Prove the converse of the Law of Sines
If $\alpha,\beta,\gamma,a,b,c \in \mathbb{R}^+$, $\alpha+\beta+\gamma=180^\circ$, and
$$
\frac{\sin(\alpha)}{a} = \frac{\sin(\beta)}{b} = \frac{\sin(\gamma)}{c} \qquad \text{ (1)}$$
then there exists a triangle in 2-space with angle-side pairs $(\alpha,a),(\beta,b),(\gamma,c)$.
I know the Law of Sines says that if we have a triangle, then $(1)$ is satisfied. I believe the converse statement must be true, as it seems like it would be. I am having trouble seeing how to show that it is true, if indeed it is.
AI: Recall that for any $\alpha,\beta,\gamma>0$ (measured in degrees) with $\alpha+\beta+\gamma=180$ there is a triangle with angles $\alpha,\beta,$ and $\gamma$.
By the law of sines the side lengths satisfy $\sin\alpha/s_1 = \sin\beta/s_2 = \sin\gamma/s_3$. Now multiply the side lengths by a factor $t$ so that $s_1t = a$. We claim that $s_2t = b$ and $s_3t = c$. Indeed,
$$
s_2t = \frac{s_2}{s_1} s_1t = \frac{\sin \beta}{\sin \alpha} s_1 t = \frac{\sin \beta}{\sin \alpha} a
$$
$$
= \frac{a}{\sin \alpha} \sin \beta = \frac{b}{\sin \beta} \sin \beta = b.
$$
The result for $s_3t=c$ is similar. Thus the claim is proved. |
H: Finding two eigenvalues which add to $1$
$\textbf{Question}$: For $0<t<\pi$, the matrix
$$
\left( \begin{array}{cc}
\cos t & -\sin t \\
\sin t & \cos t \\
\end{array}
\right)
$$
has distinct complex eigenvalues $\lambda_1$ and $\lambda_2$. For what value of $t$, where $0< t< \pi$, is $\lambda_1+\lambda_2=1$?
$\textbf{My work}$: I guessed and checked when $t=\pi/2$ (with the two eigenvalues being $\pm i$) and when $t=\pi/4$ (with the sum of the eigenvalues being $\sqrt{2}$).
How does one work through this problem? Is there any geometric insight in finding a unique $t$ which satisfies the above condition?
AI: The sum of the eigenvalues is the trace. So we need $2\cos t=1\implies \cos t=\frac{1}{2}.$ $\frac{\pi}{3}$ is a solution. |
H: number of vertices in a self-complemntary graph
Problem: Prove that the number of vertices of a self-complementary graph must be congruent to 0 or 1 modulo 4.
I think my starting point would be that $P_4$ and $C_5$ are self-complementary and to proceed by induction by adding copies of $P_4$.
I established the first step in a previous problem, but I need help on the induction step,
i.e. proving that adjoining a $P_4$ will indeed yield a self-complementary graph.
AI: Hint: If a graph with $n$ vertices is self-complementary, then the complete graph on $n$ vertices has an even number of edges.
Hence $n \equiv 0, 1 \pmod{4} $. |
H: Exercise over metric spaces
Let $A$ be a closed subset of a metric space $E$ and let $x\in E-A$.
Is it possible to get two disjoint open sets $U, V$ such that $A\subseteq U$ and $x\in V$?
If $A$ is a compact set, I know how to do the exercise, but if $A$ is merely closed I am just not sure.
AI: $E - A$ is open, so $x$ is an interior point of $E-A$. So there is an open neighborhood $N_r(x)$ inside $E-A$ with radius $r > 0$. But instead of taking all of $N_r(x)$, just set $V = N_{r/2}(x)$.
Now, for every point $y \in A$, associate a neighborhood $N_{r/2}(y)$ of $y$. Take the union of these open sets for every point in $A$, so:
$$U = \bigcup_{y \in A} N_{r/2}(y)$$
Since this is a union of open sets, $U$ is open. Moreover, if $z \in U \cap V$, then $z$ is less than $r/2$ distance from some point of $A$ and less than $r/2$ distance from $x$. Now apply the triangle inequality and we have a contradiction. |
H: Finding the constant term of a degree $3$ polynomial
Let $p(x)=x^3+ax^2+bx+c$, where $a,b,c$ are real constants. If $p(-3)=p(2)=0$ and $p'(-3)<0$, which of the following is a possible value of $c$?
A) $-27$
B) $-18$
C) $-6$
D) $-3$
E) $-\dfrac{1}{2}$
$\textbf{My attempt at this problem}$:
I drew a rough sketch of the curve on the $xy$-plane. A portion of the degree $3$ curve has a local max to the left of $-3$ and a local min between $-3$ and $2$. When $x> 2$, $p$ is a strictly increasing function, no longer intersecting the $x$-axis.
Since $p(-3)=p(2)=0$, we plug in these integers to get
$$
p(-3)=(-3)^3 + a(-3)^2+b(-3)+c=0, \\
p(2) =(2)^3 + a(2)^2+b(2)+c=0.
$$
There is another root when $x<-3$.
Is there a trick to solve this problem without so much calculation?
AI: As you say, there are three roots, one at $-3$, one at $2$, and one less than $-3$. Call the last one $r$. Then the polynomial factors as $(x+3)(x-2)(x-r)$ with $r \lt -3$. Now if you multiply out the constant term.... |
H: How can I determine $\lim_{x\to1} \frac{x^{\frac{1}{3}}-1}{x^{\frac{1}{4}}-1}$?
This is a limit as $x$ tends to $1$. Could you help me see the steps to multiply the factors? It seems that there are many multiplications, and this confuses me a lot! $$\lim_{x\to1} \frac{x^{\frac{1}{3}}-1}{x^{\frac{1}{4}}-1}$$
Without L'Hôpital's rule and derivation.
Any hint or way about how to do it?
AI: Set $y=x^{1/12}$. Then $$\frac{x^{1/3}-1}{x^{1/4}-1}=\frac{y^{4}-1}{y^{3}-1}=
\frac{y^{3}+y^{2}+y+1}{y^{2}+y+1}.$$
$$\lim_{y\rightarrow 1}\frac{y^{3}+y^{2}+y+1}{y^{2}+y+1}=\frac{4}{3}.$$ |
H: How to prove that $\Delta f = 0$ for $f(x)=\frac{1}{||x||}$?
I am given a function $f:\mathbb{R}^3\rightarrow\mathbb{R}:f(x)=\frac{1}{||x||}$, and I am not really sure that the norm must be Euclidean (anyway, it wasn't mentioned in the task), and I have to prove that $\Delta f = 0$ (Laplace operator: $\Delta f(x_1,..,x_n) = \sum_{i=1}^n \frac{\partial^2f}{\partial x_i ^2}$). But even if I assume that the norm is Euclidean, I still don't get the right result:
$$\frac{\partial ^2f}{\partial x_i^2} = -\frac{x_i^2}{(x_1^2+x_2^2+x_3^2)^{\frac{3}{2}}} + \frac{1}{(x_1^2+x_2^2+x_3^2)^{\frac{1}{2}}}$$
So:
$$\Delta f = \frac{2}{(x_1^2+x_2^2+x_3^2)^{\frac{1}{2}}}$$
So something is wrong in my calculations. Could you help me?
Thanks in advance!
AI: Do you get the same results? If not, try to repeat your calculations once more.
$$\frac{\partial ^2}{\partial x_i^2}\frac{1}{(x_1^2+x_2^2+x_3^2)^{1/2}} = \frac{2x_i^2-x_j^2-x_k^2}{(x_1^2+x_2^2+x_3^2)^{5/2}} $$
After getting all three expressions like this, you will be able to see that the sum of the numerators is equal to zero. |
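A symbolic check of the corrected computation (a sketch with SymPy):
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
f = 1 / sp.sqrt(x1**2 + x2**2 + x3**2)
print(sp.simplify(sum(sp.diff(f, v, 2) for v in (x1, x2, x3))))   # 0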
H: Vector Parametrization of intersection of a plane and an elliptical cylinder
Plane: $x+y+z= 1$
Elliptical cylinder: $(y/3)^2 + (z/8)^2 =1$
Find the parametrization in which they intersect.
AI: Hint: A point $(x,y,z) \in \mathbb R^3$ is on the cylinder iff for some $\theta \in [0,2\pi]$ we have $y = 3\cos\theta$, $z = 8\sin\theta$. Now use the equation of the plane to derive the associated value of $x$. |
H: proof of a tree with two vertices of degree three
This is a practice question from the text.
The Question : Show that a tree with two vertices of degree $3$ must have at least four vertices of degree $1$. I have the answer to PART A.
Part B) Show that the result of Part (a) is best possible, i.e. a tree with two vertices of degree $3$ need not have five vertices of degree $1$.
I would really appreciate a hint that could help me solve part B.
AI: For Part B, consider this tree: >-< |
H: Finding the root of a degree $5$ polynomial
$\textbf{Question}$: which of the following $\textbf{cannot}$ be a root of a polynomial in $x$ of the form $9x^5+ax^3+b$, where $a$ and $b$ are integers?
A) $-9$
B) $-5$
C) $\dfrac{1}{4}$
D) $\dfrac{1}{3}$
E) $9$
I thought about this question for a bit now and can anyone provide any hints because I have no clue how to begin to eliminate the choices?
Thank you very much in advance.
AI: Hint:
If a reduced rational number $\,\frac rs\,$ is a root of an integer polynomial $\,a_0+a_1x+...+a_nx^n\,$ , then
$$r\mid a_0\;,\;\;s\mid a_n$$
The above is sometimes called the Rational Root Theorem. |
H: Celsius to Fahrenheit back and forth conversion with rounding.
Recently I've encountered a problem with conversion between the Celsius and Fahrenheit scales.
Let's assume that I have a value of 44 degrees on the Fahrenheit scale. I convert this to Celsius, which gives me 6.6667 degrees Celsius. I round it to 7 degrees. Then I increment the value from 44 to 45 degrees and convert it to Celsius once more; it gives me 7.2222. I round this once again and I receive 7 degrees Celsius, as before.
My question is: is there any algorithm to avoid ambiguous rounding results for integers? Maybe a different method of rounding can be applied? Maybe some additional scale can be applied? If there is no such algorithm, what is the minimum precision in the Celsius scale that provides unambiguous conversion?
AI: As $1{}^\circ {\rm F}$ in difference of two temperatures corresponds to $\frac 59{}^\circ {\rm C}$, ambiguity in rounding to integers cannot be avoided: for example, the interval $32{}^\circ {\rm F}$ - $41{}^\circ{\rm F}$ is mapped to $0{}^\circ$ - $5{}^\circ{\rm C}$. You can avoid ambiguity if you round, for example, to $0.5{}^\circ{\rm C}$, as $\frac 12 < \frac 59$. |
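A brute-force check of this claim (a sketch in Python; the Fahrenheit range is arbitrary): after rounding to the nearest $0.5{}^\circ$C, no two consecutive integer Fahrenheit values collide.
def c_half(fahr):
    c = (fahr - 32) * 5.0 / 9.0
    return round(c * 2) / 2          # round to the nearest 0.5 degrees C

vals = [c_half(f) for f in range(-200, 201)]
print(all(a != b for a, b in zip(vals, vals[1:])))   # True: no ambiguity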
H: Decreasing sequence in a normed space
Consider for $p\geq 1$ the set $l^p=\{(x_n):x_n\in\mathbb{R},\,\,\sum |x_n|^p<\infty\}$. For $x\in l^1$, define $$||x||_p=\left(\sum_{n=1}^{\infty} |x_n|^p\right)^{1/p}.$$ How can one prove that the sequence $(||x||_p)_{p\geq 1}$ is decreasing?
AI: Suppose $x\in \ell^p$ with $\|x\|_p = 1$, $q > p$. Then we have $|x_n| \le 1$ for all $n$, that is $|x_n|^q \le |x_n|^p$, hence
$$ \|x\|_q^q = \sum_{n=1}^\infty |x_n|^q \le \sum_{n=1}^\infty |x_n|^p = 1 $$
This gives $\|x\|_q \le 1$.
If $x \in \ell^p$ is arbitrary, let $x' = x/\|x\|_p$. We have $\|x'\|_p = 1$, by the above $\|x'\|_q = \|x/\|x\|_p\|_q \le 1$, therefore $\|x\|_q \le \|x\|_p$. So $p\mapsto \|x\|_p$ is decreasing. |
H: Smooth function which is not continuous
I have seen it mentioned that in certain infinite dimensional topological vector spaces it is possible to have a smooth curve which is not continuous, but I've never seen an explicit example. Can anybody point me towards a reference for this?
AI: The mentioner might have been thinking of discontinuous linear maps on infinite-dimensional topological vector spaces. You can read about the construction of such functions at Wikipedia. The construction relies on the axiom of choice.
I am not an expert on notions of differentiability in infinite-dimensional spaces, but any linear function is well-approximated by a linear function (itself!). So I believe that a discontinuous linear function has a Fréchet derivative.
Edit: Christopher Wong explains below that the function will be Gateaux differentiable but not Fréchet differentiable.
H: Finding all possibilities as the fourth vertex of a parallogram
Consider the points $A=(-1,2), B=(6,4)$, and $C=(1,-20)$ in the plane.
For how many different points $D$ in the plane are $A,B,C,D$ the vertices of a parallelogram?
A) none
B) one
C) two
D) three
E) four
$\textbf{My attempt}$: One parallelogram can be seen by putting the fourth vertex down and right of vertex $B$, i.e., this is the point located at $B+\vec{AC}$, where $\vec{AC}=C-A$.
Another parallelogram can be seen by putting the fourth vertex up and to the left of vertex $B$, i.e., this is the point located at $B-\vec{AC}$.
So I guessed C), which is two but the answer is supposed to be D), which is three. How do you see the third parallelogram?
AI: One with $\vec{AD}=\vec{BC}$, a second with $\vec{AC}=\vec{BD}$, and a third with $\vec{AD}=\vec{CB}$. Please note the order of my vertices is not random: looking at the points in the plane, $D$ can be either up and to the right of $A$ or down and to the left of $A$, whereas the remaining possibility puts $D$ down and to the right of $B$. Draw it.
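In coordinates, the three candidates are $B+C-A$, $A+C-B$ and $A+B-C$, one for each choice of which two of the given points form a diagonal. A quick check (a sketch of my own, assuming numpy):

```python
import numpy as np

A, B, C = np.array([-1, 2]), np.array([6, 4]), np.array([1, -20])

# One candidate for D per choice of diagonal among the given points
for D in (B + C - A, A + C - B, A + B - C):
    print(D)
# prints [8 -18], [-6 -22], [4 26]
```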
H: Open neighborhoods of a $G_\delta$ set
This may have a simple answer, but I couldn't find it so far either in textbooks or in math.stackexchange. Let $X$ be a metric space, and $$A=\bigcap^\infty_{n=1}A_n$$ a $G_\delta$ subset of $X$, where $A_n\subset X$ is open for each $n\in\mathbb{N}$. We assume for simplicity that $A_n\supset A_{n+1}$ for all $n\in\mathbb{N}$.
Question: Given any $B\subset X$ open such that $B\supset A$, is there an $n\in\mathbb{N}$ such that $B\supset A_n$?
AI: Not necessarily.
Let $X=\Bbb R\setminus\{1\}$ under the subspace topology, $B=(0,1),$ and each $A_n=X\cap\left(0,1+\frac1n\right).$ Then $A=B,$ but no $A_n$ is a subset of $B$.
It may be true if we require the metric space to be complete, but I'm not sure. I'll think on it.
Edit: Drat! I see that Ittay already posted the super-nice counterexample I was about to add. And now Asaf has posted another (which hadn't occurred to me).
The upshot is that if the $G_\delta$ set $A$ is open, and is a proper subset of the $A_n$s, then letting $B=A$ gives a counterexample.
Second Edit: It occurs to me (belatedly, of course!), that my original example can be easily adapted into a counterexample where $X$ is complete, connected, and $A$ is non-empty.
Instead, put $X=\Bbb R$ in the usual (completely metrizable and connected) topology, $B=(0,1),$ and $A_n=(0,1)\cup\left(1,1+\frac1n\right).$ (Aside from $X$, these are all exactly the same sets.)
H: The probability of getting more heads than tails in a coin toss
A fair coin is to be tossed $8$ times. What is the probability that more of the tosses will result in heads than will result in tails?
$\textbf{Guess:}$ I'm guessing that by symmetry, we can write down the probability $x$ of getting exactly $4$ heads and $4$ tails and then calculate
$\dfrac{1}{2}(1-x)$.
So how does one calculate for $x$? I know that it should be a rational number (that is, $\dfrac{?}{2^8}$), but I am not sure how to get the numerator.
AI: The number in the numerator should be $\displaystyle \left( \begin{array}{c} 8 \\ 4 \end{array} \right) = \frac{8!}{4! ( 8-4)!} = 70$.
Why? Because we have $8$ tosses, and out of these tosses, we have $4$ heads. Hence $x=\dfrac{70}{256}=\dfrac{35}{128}$, and the required probability is $\dfrac{1}{2}(1-x)=\dfrac{93}{256}$.
H: simplifying $\min(\max(A,B),C) $
In a larger problem, I have to make use of the following
$$\min(\max(A,\ B),\ C)$$
Please how do I simplify?
AI: If you want to use the positive-part notation sometimes seen in stochastic calculus, $\max(A,B) = B + (A-B)_+$, then $$\min(B+(A-B)_+,\, C) = C - (C - B - (A-B)_+)_+$$
H: Determine if vectors are linearly independent
Determine if the following set of vectors is linearly independent:
$$\left[\begin{array}{r}2\\2\\0\end{array}\right],\left[\begin{array}{r}1\\-1\\1\end{array}\right],\left[\begin{array}{r}4\\2\\-2\end{array}\right]$$
I've done the following system of equations, and I think I did it right... It's been such a long time since I did this sort of thing...
Assume the following:
\begin{equation*}
a\left[\begin{array}{r}2\\2\\0\end{array}\right]+b\left[\begin{array}{r}1\\-1\\1\end{array}\right]+c\left[\begin{array}{r}4\\2\\-2\end{array}\right]=\left[\begin{array}{r}0\\0\\0\end{array}\right]
\end{equation*}
Determine if $a=b=c=0$:
\begin{align}
2a+b+4c&=0&&(1)\\
2a-b+2c&=0&&(2)\\
b-2c&=0&&(3)
\end{align}
Subtract $(2)$ from $(1)$:
\begin{align}
b+c&=0&&(4)\\
b-2c&=0&&(5)
\end{align}
Subtracting $(5)$ from $(4)$, we get $3c=0$, so $c=0$.
So now what do I do with this fact? I'm tempted to say that only $c=0$, and $a$ and $b$ can be something else, but I don't trust that my intuition is right.
AI: You just stopped too early:
Since you have 3 varibles with 3 equations, you can simply obtain $a,b,c$ by substituting $c = 0$ back into the two equations:
From equation $(3)$, $c = 0 \implies b = 0$.
With $b = 0, c = 0$ substituted into equation $(1)$ or $(2)$, $b = c = 0 \implies a = 0$.
So in the end, since
$$\begin{equation*}
a\left[\begin{array}{r}2\\2\\0\end{array}\right]+b\left[\begin{array}{r}1\\-1\\1\end{array}\right]+c\left[\begin{array}{r}4\\2\\-2\end{array}\right]=\left[\begin{array}{r}0\\0\\0\end{array}\right] \end{equation*}\implies a = b = c = 0, $$
the vectors are linearly independent, based on the definition(shown below).
The list of vectors is said to be linearly independent if the only $c_1,...,c_n$ solving the equation $0=c_1v_1+...+c_nv_n$ are $c_1=c_2=...=c_n=0.$
You could have, similarly, constructed a $3\times 3$ matrix $M$ with the three given vectors as its columns, and computed the determinant of $M$. Why would this help? Because we know that if $\det M \neq 0$, the given vectors are linearly independent. (However, this method applies only when the number of vectors is equal to the dimension of the Euclidean space.)
$$M = \begin{bmatrix} 2 & 1 & 4 \\ 2 & -1 & 2 \\ 0 & 1 & -2 \end{bmatrix}$$
$$\det M = 12 \neq 0 \implies\;\text{linear independence of the columns}.$$
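For completeness, a numerical evaluation of that determinant (a sketch of my own, assuming numpy):

```python
import numpy as np

M = np.array([[2, 1, 4],
              [2, -1, 2],
              [0, 1, -2]])
print(np.linalg.det(M))  # ~12.0, nonzero, so the columns are linearly independent
```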
H: Why is the line between two points called the line of the "secant"?
The definition of the slope of the line of the secant is:
slope $= \frac{y_2-y_1}{x_2-x_1}$
The definition of the slope of the tangent line is:
$\lim_{h\to0}\frac{f(x+h)-f(x)}{h}$
I understand why they call it the tangent line, since the tangent of the angle to the $x$-axis is $\tan(\theta) =\frac{\text{Opp}}{\text{Adj}}$, opposite over adjacent.
Secant is the reciprocal of cosine, so $\sec(\theta)=\frac{\text{Hyp}}{\text{Adj}}$.
But I don't understand how the secant is related to the slope of its line. I looked it up and found out that the word secant comes from the Latin word secare, which means to cut. But is there any relation between the secant and its angle?
AI: Because you can define $\sec(\theta)$ as a length on the unit circle: $\sec(\theta)$ is the length of the segment from $(0,0)$ to $(1, \tan(\theta))$, and $\tan(\theta)$ is the length of the segment from $(1,0)$ to $(1, \tan(\theta))$. See the figure here. Clearly the secant segment cuts the circle and the tangent segment is tangent to it.
H: Every bounded non countable subset of $\mathbb{R}$ has a two-sided accumulation point.
Inspired by the problem of proving that every compact subset of the Sorgenfrey line is countable.
Trying to prove either of these in $\mathbb{R}$: 1) Every bounded uncountable set has a two-sided accumulation point. That is, I am supposing a stronger version of Bolzano's theorem.
2) Every bounded uncountable set has an interval of irrational numbers as a subset. And I think it's almost the same as saying that the bounded uncountable set has a subset where every point is an accumulation point.
First of all, they might not be true (though I feel very sure they are), and second, I don't yet have the tools to prove them, even knowing that it's not that complicated.
AI: Rudin - Principles of Mathematical Analysis - Page 45 - Problem 27
All but a countable number of points are accumulation points in every uncountable subset of $\mathbb{R}^n$.
In fact, all but a countable number of points are condensation points in every uncountable subset of $\mathbb{R}^n$. A condensation point of a set $E$, is a point in which every neighborhood contains uncountably many points of $E$. Obviously, if a point is a condensation point then it is an accumulation point.
Not sure exactly what you mean by "an interval of irrational numbers". However, it should be noted that it is entirely possible to define a subset of $\mathbb{R}$ that is uncountable and has no rational number as an accumulation point.
H: The number of continuous real-valued functions on $[-1,1]$ such that $(f(x))^2=x^2$ for all $x$ in the domain
I've been thinking about the following problem:
How many continuous real-valued functions $f$ are there with domain $[-1,1]$ such that $(f(x))^2=x^2$ for each $x$ in $[-1,1]$?
I thought that there are two continuous real-valued functions on the interval $[-1,1]$ such that $(f(x))^2=x^2$, but why should there be four such functions?
Thank you.
AI: The condition forces $f(x)=\pm |x|$ at each point, and by continuity the sign can only change at $x=0$. Hence $x$, $-x$, $|x|$, $-|x|$ all fit the bill, and these are the only possibilities: four functions in all.
H: Must $X$ be lindelöf if it has countable network?
A network is like a base, except that its members need not be open sets.
A family $\mathcal N$ of subsets of a topological space $X$ is a network for $X$ if for every point $x\in X$ and any neighbourhood $U$ of $x$ there exists an $M \in \mathcal N$ such that $x\in M \subset U$.
Suppose $X$ has a countable network. Must $X$ be Lindelöf?
AI: Yes. Let $\mathscr{N}$ be a countable network for $X$, and let $\mathscr{U}$ be an open cover of $X$. For each $x\in X$ choose $N_x\in \mathscr{N}$ such that $x\in N_x\subseteq U$ for some $U\in\mathscr{U}$. Let $\mathscr{N}_0=\{N_x:x\in X\}$; clearly $\mathscr{N}_0$ is countable and covers $X$. $\mathscr{N}_0$ refines $\mathscr{U}$, so for each $N\in\mathscr{N}_0$ we can choose a $U_N\in\mathscr{U}$ such that $N\subseteq U_N$. Then $\{U_N:N\in\mathscr{N}_0\}$ is a countable subcover of $\mathscr{U}$.
H: What is the importance of examples in the study of group theory?
When I study topics in group theory (I am currently following Dummit and Foote) I don't care about examples so much. I read them, try to understand the applications of the theorems and corollaries on the examples. Most of the examples are about $D_{2n}, S_n$ and $\mathbb{Z}_n$, etc.
However, if I have difficulty with them, sometimes I skip them (although I mostly try hard to understand them first). The same happens with exercises, I care more about questions which ask me to prove general statements - in the form of corollaries or theorems. For example: "prove that $G$ is abelian if $G/Z(G)$ is cyclic"
So, my question is, what is the importance of examples of group theory? Should I care more about examples? Or will I be okay if I understand the definitions, theorems, corollaries and solve the proof-based exercises well?
Also, is there a reference - a book or a collection of sites - which talks about the groups only?
I mean it talks about, for example, $D_{2n}$, Matrix groups , $S_n$ , Klein four-group etc and the relations between those groups, their properties, how they act on quotients, the relation between the center and factor of the group and their subgroups, their normal subgroups, their sylow p-subgroups and so on.
If no such reference exists, then I have to collect the information of each group from the sections and exercises and try to summarize these information of each group which is mentioned in the text.
AI: There is some pedagogical value to understanding how theorems and definitions apply to, and play out with, particular groups (i.e., through examples). While it is admirable to yearn for a thorough theoretical understanding of group theory, try to think of studying and probing the examples as a means of testing your understanding of the theories and definitions, as they relate to specific groups. Being able to construct a proof is wonderful, but being able to use and apply your learning is crucial to obtaining a fluency with what you're learning. Being able to understand and apply the theorems and definitions is also a test of your comprehension, and will facilitate the proof-writing process as well. For example, being able to construct and anticipate counterexamples is crucial to proof-writing, and counterexamples will be hard to come by if you don't understand how particular groups exemplify or fail to exemplify group properties.
A great website is the website Groupprops - "A Group Properties Wiki". You'll find a menu to the left:
Terms/definitions
Facts/theorems
Survey articles
Specific information
and can quickly look-up (almost) anything/everything you'd want to know about (almost) any group.
H: Probability of being matched against a pair of people
$\textbf{Question:}$ Suppose you are playing a game in which two teams of five people, call them Team A and Team B, compete. Each of the ten people is randomly assigned a unique role (no two people share the same role) from a pool of 100 different roles, call them Role 1, Role 2, $\dots$ , Role 100. Assume you are on Team A. What is the probability that Role 1 and Role 2 will be assigned to people on Team B?
$\textbf{Full Disclosure:}$ For those of you familiar with the game DotA 2, I'm wondering what is the probability of facing a specific pair of heroes, such as Keeper of the Light and Phantom Lancer, while playing All Random.
$\textbf{Attempted Solution:}$ The number of ways to choose ten roles from the pool of 100 is $\binom{100}{10}$. The total number of ways to choose ten roles that include Role 1 and Role 2 is $\binom{98}{8}$, so the probability that Role 1 and Role 2 are assigned to two of the ten players is $\frac{\binom{98}{8}}{\binom{100}{10}}$. The total number of ways to form two teams of five from ten people is $\binom{10}{5}$. If two of the ten people are assigned Role 1 and Role 2, the total number of ways to form teams of 5 such that they are on opposite teams is $\binom{8}{4}$, so the probability that the two people assigned Role 1 and Role 2 are on the same team is $1-\frac{\binom{8}{4}}{\binom{10}{5}}$. The probability that you are placed on either team is $\frac 1 2$. Thus, the probability that Role 1 and Role 2 are assigned to two of the ten people AND that those two people are on the same team AND that you are on the team facing them is $$\frac{\binom{98}{8}}{\binom{100}{10}}\left(1-\frac{\binom{8}{4}}{\binom{10}{5}}\right)\frac{1}{2}=\frac{13}{3960}$$
Is my solution correct? Also, let me know if anything can/should be clarified. Thanks!
AI: As you wrote, the probability that Role 1 and Role 2 are both chosen is
$$\frac{\binom{98}{8}}{\binom{100}{10}}.\tag{1}$$
Now line up the $10$ chosen roles, with say Roles 1 and 2 first. The probability Role 1 will be assigned to somebody on Team B is $\frac{5}{10}$. Given this has happened, the probability Role 2 is assigned to someone on Team B is $\frac{4}{9}$.
For our probability, multiply the result of (1) by $\frac{5}{10}\cdot\frac{4}{9}$.
Alternately, there are $\binom{10}{2}$ ways to choose the pair of people who will get Roles 1 and 2 (here we are just counting the number of pairs of people, not which one gets which role). And there are $\binom{5}{2}$ ways to choose them from Team B. Thus $\frac{5}{10}\cdot\frac{4}{9}$ can be replaced by $\dfrac{\binom{5}{2}}{\binom{10}{2}}$.
Remark: An analysis somewhat like yours could be made. Suppose Team A is to wear blue, and Team B is to wear red. Given that we have chosen the people who will get the various roles, we find the probability that the two particular people who got Roles 1 and 2 end up wearing red. There are $\binom{10}{5}$ ways to decide who will wear red. And there are $\binom{8}{3}$ ways of choosing a team that will wear red if two of the members are to be a certain two fixed people. So the required multiplier is
$$\frac{\binom{8}{3}}{\binom{10}{5}}.$$
Simplify. We get the right number.
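Since $\binom{98}{8}/\binom{100}{10}=\frac{10\cdot 9}{100\cdot 99}=\frac{1}{110}$, the final probability is exactly $\frac{1}{110}\cdot\frac{5}{10}\cdot\frac{4}{9}=\frac{1}{495}$. A Monte Carlo check (a sketch of my own, plain Python):

```python
import random

trials, hits = 200_000, 0
for _ in range(trials):
    roles = random.sample(range(1, 101), 10)  # distinct roles for the ten players
    team_b = roles[5:]                        # say the last five players are Team B
    if 1 in team_b and 2 in team_b:
        hits += 1
print(hits / trials, 1 / 495)  # the estimate should be close to 1/495 ~ 0.00202
```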
H: Weakly inaccessible cardinals and Discovering Modern Set Theory
So I've been trying to teach myself some set theory and I've come across some exercises in Just and Weese's Discovering Modern Set Theory. To whit:
Pg. 180
Definition 20: A cardinal $\kappa$ is called weakly inaccessible if $\kappa$ is an uncountable regular limit cardinal.
Exercise 27 (PG): Show that if $\alpha$ is a weakly inaccessible cardinal, then $\alpha=\aleph_\alpha$.
Exercise 28 (R): Show that the smallest ordinal $\alpha$ such that $\alpha=\aleph_\alpha$ is not a weakly inaccessible cardinal.
Ends
For the first I've proved by induction that $\alpha\subseteq\aleph_\alpha$ for all ordinals $\alpha$, obviously I have to use the weakly inaccessible part to go the other way, but I've no idea where to start. Weakly inaccessible gives that every function with cofinal range in $\alpha$ has domain at least $\alpha$ but that is saying that things are big, where as I need that $\aleph_\alpha$ is small.
The second is beyond me, I considered trying to show that ZFC proves the existence of such an ordinal $\alpha$, and then appeal to the fact that the existence of a weakly inaccessible cardinal is independent of ZFC, however this result is yet to be obtained in the text and so I imagine this can't be what is wanted.
Thoughts?
AI: The second one is easier, simply define by recursion $\lambda_0=\mu$, $\lambda_{n+1}=\aleph_{\lambda_n}$, for $n\in\omega$ ($\mu$ is any cardinal). Then calculate what is $\lambda_\omega=\sup\{\lambda_n\mid n\in\omega\}$.
The first one is not much harder either. Because $\aleph_\alpha$ is limit, $\alpha$ is a limit ordinal. Pick a cofinal sequence in $\alpha$, $\langle\delta_i\mid i<\operatorname{cf}(\alpha)\rangle$. Consider the sequence $\aleph_{\delta_i}$. What is its limit? What can you conclude on $\operatorname{cf}(\alpha)$ from the assumption that $\aleph_\alpha$ is regular as well?
H: Counting the number of elements in the set $\{ x^{13n}:n \mbox{ is a positive integer}\}$ under certain conditions
A cyclic group of order $15$ has an element $x$ such that the set $\{x^3,x^5,x^9 \}$ has exactly two elements. How many elements are in the set $\{ x^{13n}:n \mbox{ is a positive integer}\}$?
I feel like some form of $\textbf{gcd}$ or $\textbf{lcm}$ is involved, but I could be wrong. Any form of hints will be greatly appreciated.
AI: Hints: The group is cyclic, and of order $15$. Keep in mind that one of $x^3,x^5,x^9$ is distinct from the other two, so we certainly can't have $x=e$.
If $x^3=x^9,$ then $x^6=e$, but the order of $x$ must divide the order of the group (and also divide $6$, in this case), and so the order of $x$ is $3$.
Show that the other two cases are impossible.
Thus, $x$ has order $3.$ What can we then say about the number of distinct values that powers of $x$ can take? Since $13n$ ranges through all values (modulo $3$) for positive integers $n$, what can we say about the number of values that $x^{13n}$ does take?
H: determine whether graph is planar
This is not a HW question just a practice exercise in the text.
The question is to determine whether it's planar or not. I don't think it's planar, and I can't find a subgraph that is homeomorphic to $K_{3,3}$ or $K_5$. I have a feeling I might be wrong, and if I am right, i.e. the graph is not planar, then how do I show it's not planar?
I would appreciate a hint.
AI: If you collapse $h,i,j$ and also $c,e$, then I think $a,b,\{c,e\},d,\{h,i,j\}$ is a $K_5$.
Edit: And so it isn't planar as it contains $K_5$ as a minor.
H: probability of who will be selected
"In an office there are 3 secretaries, 4 accountants, and 2 receptionists. If a committee of 3 is to be formed, find the probability that one of each will be selected?
Attempted Solution:
First attempt: (3/9)(4/9)(2/9) = 8/243
Second attempt: (3/9)(4/8)(2/7) = 1/21
I don't know if either of these is correct. Any help to point me in the right direction would be greatly appreciated. Thank you.
AI: The standard way to do this is to say there are $\binom{9}{3}$ equally likely ways to choose $3$ people from the $9$. And there are $\binom{3}{1}\binom{4}{1}\binom{2}{1}$ ways to choose one of each kind. For the secretary can be chosen in $\binom{3}{1}$ ways. For each such choice, the accompanying accountant can be chosen in $\binom{4}{1}$ ways. And once this has been done, the receptionist can be chosen in $\binom{2}{1}$ ways.
For the probability, divide "favourables" by "total." We get
$$\frac{\binom{3}{1}\binom{4}{1}\binom{2}{1}}{\binom{9}{3}}.$$
Another way: Imagine choosing the people one at a time. We find the probability of choosing a secretary, then an accountant, then a receptionist.
The probability the first person chosen was an S is $\frac{3}{9}$. Given this happened, the probability the next chosen person was an A is $\frac{4}{8}$. And given this happened, the probability the next person chosen was an R is $\frac{2}{7}$, for a probability of $\frac{3}{9}\cdot\frac{4}{8}\cdot\frac{2}{7}$.
But there are several other ways we could end up with one of each, for example A then S then R. If you calculate, this ends up having the probability $\frac{4}{9}\cdot\frac{3}{8}\cdot\frac{2}{7}$. This is the same number as the one previously obtained.
There are in total $3!=6$ ways we can end up with one of each. so the required probability is
$$6\cdot \frac{3}{9}\cdot\frac{4}{8}\cdot\frac{2}{7}.$$
The first way is (after you get used to it) easier.
H: Mutually Exclusive Events (or not)
Suppose that P(A) = 0.42, P(B) = 0.38 and P(A U B) = 0.70. Are A and B mutually exclusive? Explain your answer.
Now from what I gather, mutually exclusive events are those that are not dependent upon one another, correct? If that's the case then they are not mutually exclusive, since $P(A) + P(B)$ does not equal $P(A \cup B)$. Only if $P(A \cup B)$ had been $0.80$ would they have been considered mutually exclusive. Correct?
AI: No, mutually exclusive events are events that cannot occur simultaneously: they are disjoint. If $A$ and $B$ are disjoint, then $\Bbb P(A\cup B)=\Bbb P(A)+\Bbb P(B)=0.42+0.38=0.80$. That’s not the case here, so $A$ and $B$ are not mutually exclusive.
In other words, your calculations and probably your reasoning are correct, but your use of terminology is not: mutually exclusive does not mean independent.
H: Content of a polynomial
Define the content of a polynomial (over an arbitrary commutative ring $A$) to be the ideal generated by its coefficients, denoted $c(f)$. I want to show that
$$c(fg) = c(f)c(g).$$
(I was told this is true.)
What I was able to show was that $c(fg) \subseteq c(f)c(g)$ (this is obvious), and that their radicals are the same. My reasoning for the latter was as follows: let $f = a_0 + \dotsb + a_nx^n,$ $g = b_0 + \cdots + b_mx^m,$ and consider the matrix
$$
\begin{pmatrix}
a_0 b_0 & \cdots & a_0b_m\\
\vdots & \ddots & \vdots\\
a_nb_0 & \cdots & a_nb_m
\end{pmatrix}
.$$
Then $c(f)c(g)$ is generated by the entries of this matrix, while $c(fg)$ is generated by the sums along the diagonals. Let $\mathfrak p$ be a prime ideal of $A$ not containing $c(f)$ or $c(g)$, and let $i,\,j$ be minimal such that $a_i,\, b_j \notin \mathfrak p$. Then all the terms in the generator of $c(fg)$ corresponding to the coefficient of $x^{i+j}$ are in $\mathfrak p$ except for $a_ib_j$, so that $c(fg) \not\subset\mathfrak p$. It follows that $c(fg) \subseteq \mathfrak p \Longleftrightarrow c(f)c(g) \subseteq\mathfrak p$, and hence $\sqrt{c(fg)} = \sqrt{c(f)c(g)}$.
Can I go all the way and show that in fact $c(fg) = c(f) c(g)$?
AI: The displayed equation is not true in general. For example, let $A=k[t,u]$, $k$ a field and $t, u$ indeterminates. Let $f=t+ux$ and $g=t-ux$. Then $c(fg) = c(t^2 - u^2 x^2) = (t^2, u^2)$, but $c(f)c(g) = (t,u)^2 = (t^2, tu, u^2)$, a strictly larger ideal of $A$.
A ring $A$ for which the displayed equation does hold for all $f,g \in A[x]$ is sometimes called a Gaussian ring. It is closely related to the condition of being a Prüfer domain, and indeed holds whenever $A$ is Prüfer.
H: Volume of a Parallelepiped, Homework Check
I'm not sure if I am doing these types of questions properly, so I figured I would ask here. Find the volume of the parallelepiped with sides $u$, $v$, and $w$:
$u = (3, 1, 2)$
$v = (4, 5, 1)$
$w = (1, 2, 4)$
(v x w) =
$\begin{align}
\begin{bmatrix}
4 & 5 &1\\
1 &2 & 4
\end{bmatrix}
\end{align}$
= $\begin{align}
\begin{bmatrix}
5 & 1\\
2 & 4
\end{bmatrix}
\end{align}$
-$\begin{align}
\begin{bmatrix}
4 & 1\\
1 & 4
\end{bmatrix}
\end{align}$
+$\begin{align}
\begin{bmatrix}
4 & 5\\
1 & 2
\end{bmatrix}
\end{align}$
$= |18, -15, 3|$
$= u\cdot(v \times w) = -54 + 15 - 6$
$= -33$
(Pretty sure I did this wrong, as you can't get a negative volume)
$u = (2, -6, 2)$
$v = (0, 4, -2)$
$w = (2, 2, -4)$
(v x w) =
$\begin{align}
\begin{bmatrix}
0 & 4 &-2\\
2 &2 & 4
\end{bmatrix}
\end{align}$
= $\begin{align}
\begin{bmatrix}
4 & -2\\
2 & -4
\end{bmatrix}
\end{align}$
-$\begin{align}
\begin{bmatrix}
0 & -2\\
2 & -4
\end{bmatrix}
\end{align}$
+$\begin{align}
\begin{bmatrix}
0 & 4\\
2 & 2
\end{bmatrix}
\end{align}$
$= |-12, 4, -8|$
$= u\cdot(v \times w) = -26 + 24 + 16$
$= 66$
AI: You want to be careful with your notation; stick $i,j,k$ in where appropriate, i.e.
$(a,b,c)\times (d,e,f)=\begin{vmatrix} i & j & k\\ a & b & c\\ d & e &f \end{vmatrix}
= i \begin{vmatrix} b & c\\ e & f\end{vmatrix} - j \begin{vmatrix} a & c\\ d & f\end{vmatrix} + k \begin{vmatrix} a & b\\ d & e\end{vmatrix}$
As for the sign error notice that $A\cdot (B\times C)=A\cdot (-(C\times B))=-A\cdot (C\times B)$ so when you are interpreting this as a volume make sure to take the absolute value.
In the first part it looks to me like you dotted $-u$ with $v\times w$, not $u$.
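To settle the arithmetic, here is a check of both scalar triple products (a sketch of my own, assuming numpy):

```python
import numpy as np

for u, v, w in [((3, 1, 2), (4, 5, 1), (1, 2, 4)),
                ((2, -6, 2), (0, 4, -2), (2, 2, -4))]:
    triple = np.dot(u, np.cross(v, w))
    print(triple, abs(triple))
# The triple products are 45 and -16, so the volumes are 45 and 16.
```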
H: Solving a Linear System
I'm reading through my textbook, and it gives the linear system:
$a+2b-c+d=0$
$2a+3b+c+d=0$
$3a-b+2c+d=0$
It doesn't explain how this is solved, they just provide the answer which they come up with:
$a = -\frac{9}{16}t$
$b = -\frac{1}{16}t$
$c = \frac{5}{16}t$
$d = t$
I don't understand how the author came up with this solution, could anyone explain this? Thanks.
AI: Hint: use Gaussian elimination to solve the system:
$$\begin{bmatrix}
1 & 2 & - 1 & 1 \\
2 & 3 & 1 & 1 \\
3 & -1 & 2 & 1 \\
\end{bmatrix}
\begin{bmatrix}
a \\
b \\
c \\
d
\end{bmatrix}
=
\begin{bmatrix}
0 \\
0 \\
0 \\
\end{bmatrix}$$
This can be done by applying Gaussian elimination to row reduce the augmented matrix:
$$\begin{bmatrix}
1 & 2 & - 1 & 1 & 0 \\
2 & 3 & 1 & 1 & 0\\
3 & -1 & 2 & 1 & 0 \\
\end{bmatrix}$$
As I mentioned in a previous answer, if you are not familiar with this, this Wikipedia article is quite helpful: http://en.wikipedia.org/wiki/Gaussian_elimination and you may also find Paul's notes useful: http://tutorial.math.lamar.edu/Classes/Alg/AugmentedMatrix.aspx.
Of course, we can see that you have a system of 3 equations with 4 unknowns, and so (at least) one variable must be free; it serves as the scaling parameter which determines the other 3 values.
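A symbolic check that reproduces the book's one-parameter family (a sketch of my own, assuming sympy):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
eqs = [a + 2*b - c + d, 2*a + 3*b + c + d, 3*a - b + 2*c + d]
print(sp.solve(eqs, (a, b, c)))
# {a: -9*d/16, b: -d/16, c: 5*d/16}; setting d = t gives the book's answer
```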
H: How to derive the expected value of $X^\alpha\log X$
Let $X$ follow a Weibull distribution, with density $$f(x)=\frac{\alpha}{\theta}x^{\alpha-1}e^{-\frac{x^{\alpha}}{\theta}}\quad x>0 .$$
How can I find the following expectation?
$$E[X^{\alpha}\log X]$$
The answer given in the paper by Debasis Kundu "Estimation of $P(Y\le X)$ for Weibull distribution" (page 9) is given below. Paper link: http://home.iitk.ac.in/~kundu/paper112.pdf
$$\frac{1}{\alpha}\theta[\ln(\theta)+\Gamma'(2)]$$
Any suggestions? Many thanks.
AI: I get the following result:
$$E(X^{\alpha} \log{X}) = \frac{\theta}{\alpha} (\Gamma'(2) + \log{\theta})$$
get this from
$$\begin{align}E(X^{\alpha} \log{X}) &= \frac{\alpha}{\theta}\underbrace{\int_0^{\infty} dx \, \log{x} \, x^{2 \alpha-1} \, e^{-x^{\alpha}/\theta}}_{y=x^{\alpha}}\\&= \frac{1}{\alpha \theta} \underbrace{\int_0^{\infty} dy \, y\, \log{y} \, e^{-y/\theta}}_{\text{integrate by parts}}\\ &= \frac{1}{\alpha} \int_0^{\infty} dy \, e^{-y/\theta} (1+\log{y})\\&=\frac{\theta}{\alpha} + \frac{1}{\alpha} \underbrace{\int_0^{\infty} dy \log{y} \, e^{-y/\theta}}_{y=u \theta}\\&= \frac{\theta}{\alpha}+\frac{\theta}{\alpha} \log{\theta} + \frac{\theta}{\alpha} \underbrace{\int_0^{\infty} du \, \log{u} \, e^{-u}}_{\text{this is equal to } -\gamma} \\ &= \frac{\theta}{\alpha} (1-\gamma + \log{\theta})\\ &= \frac{\theta}{\alpha} (\Gamma'(2) + \log{\theta}) \end{align}$$
where $\gamma \approx 0.5772$ is the Euler-Mascheroni constant.
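A numerical sanity check of the closed form (a sketch of my own, assuming scipy; the parameter values $\alpha=2.5$, $\theta=1.7$ are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

alpha, theta = 2.5, 1.7
gamma = 0.5772156649015329  # Euler-Mascheroni constant

pdf = lambda x: (alpha/theta) * x**(alpha - 1) * np.exp(-x**alpha/theta)
numeric, _ = quad(lambda x: x**alpha * np.log(x) * pdf(x), 0, np.inf)
closed = (theta/alpha) * (1 - gamma + np.log(theta))
print(numeric, closed)  # the two values should agree to many decimal places
```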
H: How Would You Interpret This Question?
A man has 5 coins, two of which are double-headed,
one is double-tailed, and two are normal.
He picks a coin at random and tosses it.
What's the probability that the lower face is a tail?
P(H) = P(H|2H)P(2H) + P(H|2T)P(2T) + P(H|N)P(N) = $\dfrac {3}{5}$
He sees that the coin is showing head.
What's the probability that the lower face is a head?
P(${H_l}|{H_h}$) = $\dfrac {P({H_l} \cap {H_h})}{P({H_h})}$ = $ \dfrac {3}{4} $
I got those questions. Now, the man tosses the coin again.
What is the probability that the lower face is a head?
My instructor gave the following:
Let ${H_h}{H_l}$ be the event that he sees a head up on the first toss and a head down on the second toss.
P(${H_h}{H_l}|{H_h}$) = $\dfrac {P({H_h}{H_l} \cap {H_h})}{P({H_h})}$ = $\dfrac {P({H_h}{H_l})}{P({H_h})}$ = $\dfrac {5}{6}$
I understand his calculation, but his interpretation is different from mine.
Since he isn't holding the two-tailed coin, P(2H) = P(N) = $\dfrac {1}{2}$
$P({H_l})$ = $P({H_l}|2H)P(2H) + P({H_l}|N)P(N)$ = $\frac {1}{2} + \frac {1}{2} (\frac{1}{2})$ = $\frac {3}{4}$
Can you comment if my answer is entirely wrong or if I simply interpreted the question differently? Thanks!
AI: Your answer is wrong, I’m afraid: you’re twice as likely to be holding a double-headed coin as a normal coin. You know that you’re not holding the double-tailed coin, so when you flipped it the first time, there were $8$ possible outcomes, all equally likely, corresponding to the $8$ sides of the $4$ coins that have at least one head. Six of those outcomes are heads, and of those $6$, $4$ come from the double-headed coins and only $2$ from the normal coins. Thus, the odds are $4:2$ that your coin is double-headed. In terms of probability that means that with probability $\frac23$ you have a double-headed coin and are certain to get a head on the next toss, and with probability $\frac13$ you have a normal coin and a probability of $\frac12$ of getting a head on the next toss. Your overall probability of getting a head on the second toss is therefore
$$\frac23\cdot1+\frac13\cdot\frac12=\frac56\;.$$
It takes a bit of finagling to devise a situation in which your calculation gives the right answer. Suppose that after you toss the first coin, an observer says, ‘Okay, that eliminates the double-tailed coin as a possibility. Here, let me eliminate it from the set.’ He then throws it away and lets you pick one of the remaining $4$ coins at random. In this scenario it’s true that you get a normal coin with probability $\frac12$ and a double-headed coin with probability $\frac12$, so your probability of getting a head on your second toss (with a possibly different coin) is
$$\frac12\cdot1+\frac12\cdot\frac12=\frac34\;.$$
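A simulation of the whole experiment reproduces the $\frac56$ (an illustrative sketch of my own, plain Python):

```python
import random

coins = [('H', 'H'), ('H', 'H'), ('T', 'T'), ('H', 'T'), ('H', 'T')]
shown_head = lower_head_next = 0
for _ in range(400_000):
    coin = random.choice(coins)
    up = random.randrange(2)            # face landing up on the first toss
    if coin[up] == 'H':                 # condition on seeing a head
        shown_head += 1
        up2 = random.randrange(2)       # toss the same coin again
        if coin[1 - up2] == 'H':        # lower face on the second toss
            lower_head_next += 1
print(lower_head_next / shown_head, 5/6)  # estimate should be close to 0.8333
```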
H: Need help proving the determinant of a particular sum of matrices.
I'm just learning how to use Mathematica and I was screwing around with it and I noticed that the following expression holds for a bunch of numbers that I threw into it.
I was wondering if someone could help me prove/disprove this?
$\det(cJ_n+I)=cn+1$
Where $c\in \mathbb{R}$ is some constant
$J_n$ is the $n\times n$ matrix of ones
$I$ is the $n\times n$ identity matrix
I feel like there might be something simple here, though I'm not really sure how to approach determinants of sums. I'm a first year undergrad, FWIW.
Hints/help is much appreciated!
AI: The easiest way to see this is by changing the basis and noting that this leaves the determinant unchanged. I'll do it for $c=1$, you can do it analogously for other $c$. By inspection the eigenvalues are $n$ with multiplicity $1$, and $0$ with multiplicity $n-1$, with corresponding eigenvectors $(1,1,...,1)$ and $(1,-1,0,...,0)$, $(1,0,-1,0,...0)$,...,$(1,0,...,0,-1)$, respectively. Note that these eigenvectors form a basis for the vector space. The identity matrix looks the same in any basis, so diagonalize $J_n$, so that it has $n$ in the top-left corner, and is zero everywhere else. Then $$J_n+I=\begin{bmatrix}
n+1 & 0 & \cdots & 0\\
0 & 1 & \cdots & 0 \\
\vdots & 0 & \ddots &\vdots \\
0 & 0 & \cdots & 1
\end{bmatrix}$$
and so the determinant is $n+1$.
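A quick numerical confirmation of the identity for a few sizes and constants (a sketch of my own, assuming numpy):

```python
import numpy as np

for n in (2, 3, 5, 8):
    for c in (-0.5, 1.0, 2.0):
        M = c * np.ones((n, n)) + np.eye(n)
        print(n, c, np.linalg.det(M), c*n + 1)  # the last two columns agree
```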
H: probability hypergeometric distribution
An HR manager estimates that 35% of married employees in a large office complex have spouses whose employers provide dental insurance and 65% have spouses whose employers provide extended medical and drug insurance. Of those whose spouses have dental insurance, 90% have also extended medical and drug insurance.
a) What is the probability that a randomly selected married employee has a spouse who has both types of insurance?
Given:
Dental = 35%
Medical and Drug = 65%
Dental, medical and drug = 90% of medical and drug = 58.5%
So therefore, medical and drug ONLY = 6.5%
So the probability of (a) would be 58.5%?
b) What is the probability that a married employee has a spouse who has neither type of insurance?
Would this be ZERO? Since there is no mention of employees whose spouses are without insurance. Rather it states that all married employees have spouses who have SOME type of insurance.
I can't seem to use the nCr theorem when only percentages are given with no fixed range. I have been stuck on this for the past 60 minutes and I don't know where to start. How can I use the nCr probability distribution theorem for this particular type of question? Support would be greatly appreciated.
AI: Let $D$ be the event "spouse has dental insurance" and $E$ be the event "spouse has extended medical." The two events overlap: some spouses have both.
Note that $\Pr(D)=0.35$ and $\Pr(E)=0.65$. (These add up to $1$. That's an "accident," and perhaps deliberately chosen to increase confusion. Not nice!)
We are told that $90\%$ of the people in $D$ are also in $E$. So the probability of both is $(0.35)(0.9)$. That's the first answer.
For the second question, we want to find the probability of landing outside both $D$ and $E$.
Note that if we add $0.35$ and $0.65$, we have counted the people who are in both twice. So the probability that a person is in $D$ or $E$ (that is, in $D\cup E$) is
$$0.35+0.65-(0.35)(0.9).\tag{1}$$
The probability of landing outside both $D$ and $E$ is therefore $1$ minus the number in (1). This is
$$1-\left(0.35+0.65-(0.35)(0.9) \right).$$
This happens to simplify considerably.
Remark: We are told something that lets us find that $\Pr(D\cap E)=(0.35)(0.9)$. That's the answer to the first question.
Now use the formula
$$\Pr(D\cup E)=\Pr(D)+\Pr(E)-\Pr(D\cap E)$$
to calculate $\Pr(D\cup E)$. For the second question, we want the probability of neither $D$ nor $E$, that is, the probability of the complement of $D\cup E$. This is $1-\Pr(D\cup E)$.
H: Solve the following in non-negative integers: $3^x-y^3=1$.
Solve the following in non-negative integers: $$3^x-y^3=1$$
Of course $(x,y)=(0,0)$ is a trivial solution. After seeing that I proceeded like this:
$$3^x-y^3=1$$$$\implies3^x-1=y^3$$$$\implies2(3^{x-1}+3^{x-2}+ \cdots +3^1+1)=y^3$$$$\therefore2|y$$
So let $y=2k$ for non-negative $k$.
Then $$3^{x-1}+3^{x-2}+ \cdots +3^1+1=4k^3$$ So $4$ divides the LHS; using a mod argument one can easily deduce that this implies that $x$ is even, so let $x=2q$.
Then going back to our original expression we have $$9^q-1=8k^3$$$$\implies 9^{q-1}+9^{q-2}+ \cdots +9+1=k^3$$
Then again using mod arguments I deduced that $k \equiv 4 \pmod9$. And after that things started to get more and more yucky. This could be a completely bad approach, so I also tried looking at $(y+1)(y^2-y+1)=3^x$, but that didn't lead me anywhere either.
Thanks in advance for any help.
AI: If $y+1=3^a$ with $a\ge1\iff y>1$
$y^2-y+1=3(3^{2a-1}-3^a+1)$ which can not be power of $3$ unless $2a-1=a\iff a=1$ |
H: Using polynomials as recursions
I made this observation in my discrete math course a while back. I explored it further online, so not all the ideas contained are mine alone. I still am confused about some things, though. Consider the equation $x^2-x-1=0$ whose positive root is the golden ratio $\phi$. Rewriting the equation we get $x^2=x+1$, and for nonzero $x$ we can divide to get $x=1+\frac1x$. Now $x$ is defined recursively in terms of itself, so plugging it back into this defining equation we get $x=1+\frac1{1+\frac1x}$, and by continuing indefinitely we produce a continued fraction which converges exactly to $\phi$. Returning to the quadratic, we see also that if $x^2=x+1$, then by taking the square root we have $x=\sqrt{x+1}$; since again $x$ is defined recursively in terms of itself we can plug it back into the equation to get $\sqrt{\sqrt{x+1}+1}$ and so on, to produce an infinitely iterated root that also converges to $\phi$.
What is this process/idea called where you find a recursion that 'solves itself'? By that I mean that we had an expression for $x$ in terms of itself, which can be used iteratively to compute $x$. How can one recognize that iterated roots or fractions can be rewritten as polynomials and vice versa?
Also, I find this argument of 'plugging $x$ into itself' to be very ad-hoc and non-rigorous. Is there a more rigorous framework in which one could view this kind of thing? And is there a way to tweak this idea so that a recursion for the negative root of the quadratic can be found?
AI: You are discovering fixed point iteration, which can be a powerful technique. You arrange your equation in the form $x=f(x)$, then start with some $x_0$ that you hope is close enough to the solution and iterate $x_{n+1}=f(x_n)$ to convergence. If $f(x)$ is differentiable at the limit $L$ and $|f'(L)| \lt 1$ then it will converge in some neighborhood of $L$. Intuitively, the distance from the limit is multiplied by something close to $f'(L)$ each step. There are two parts of cleverness: you need to find a way of arranging your equation so that $|f'(L)| \lt 1$, and you need to find a starting value that is close enough. If this process converges, you have $f(L)=L$. Many times this expression lets us find the limit, which is the opposite direction from your approach. For $x^2-x-1=0$ the quadratic formula gets us there, but suppose we wanted to solve $x^{10}+x-1=0$. We could think that if $x$ is a little smaller than $1$, then $x^{10}$ will be a lot smaller than $1$, so the solution is close to $x=1$ but a little smaller. So I will write the recursion as $x_{n+1}=\sqrt[10]{1-x_n}$ and start with $x_0=0.99$. It converges quickly to $x\approx 0.835079069$. A spreadsheet makes this easy; copy-down is your friend.
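A minimal implementation of the iteration just described (a sketch of my own; the helper name fixed_point is mine):

```python
def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = f(x_n) until successive values agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# The golden ratio two ways, then the root of x^10 + x - 1 = 0:
print(fixed_point(lambda x: 1 + 1/x, 1.0))          # ~1.6180339887
print(fixed_point(lambda x: (x + 1) ** 0.5, 1.0))   # ~1.6180339887
print(fixed_point(lambda x: (1 - x) ** 0.1, 0.99))  # ~0.835079069
```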
H: Is there a non-constant smooth function mapping $\mathbb{R}$ into $\mathbb{Q}$
I cannot think of a non-constant smooth function which maps all real numbers into rational numbers.
Can anyone give a simple example ? The simpler, the better !
AI: No. There isn't even a non-constant continuous function $\mathbb R \rightarrow \mathbb Q$. Recall a continuous function maps connected sets to connected sets. We know that $\mathbb R $ is connected, but $\mathbb Q$ is totally disconnected: each point is its own connected component. So any continuous map $\mathbb R \rightarrow \mathbb Q$ must be constant. |
H: Given that $\int_{a}^{b} f(x) dx\le M$ for all $a,b\in\mathbb{R}$
Given that for a continuous $f$, $\int_{a}^{b} f(x) dx\le M$ for all $a,b\in\mathbb{R}$
Then
i) $\int_{0}^{\infty} f(x) dx$ exists if $f\ge 0$
ii) $\int_{0}^{\infty} f$ exists if $f$ is differentiable
iii) $\int_{0}^{\infty} f$ exists if $f$ is differentiable and bounded
iv)$\int_{0}^{\infty} f$ exists.
For (ii), (iii) and (iv), take $f(x)=\sin x$; so only (i) is true?
AI: I am reproducing Andrew Salmon's comment above so that this does not go unanswered.
Yes, you are correct. If $f\ge 0$, then $g_n=\int_0^n f(x)\ dx$ is
monotonic and bounded above.
(...which means the limit as $n$ goes to infinity exists, so the improper integral exists.)
H: A set is a finite chain if every subset has a top and bottom element
I am presently attempting Exercise 2 in Kaplansky, Set Theory and Metric Spaces
Exercise 2: Let $L$ be a partially ordered set in which every subset has a top and bottom element. Prove that $L$ is a finite chain.
Proof: Denote a subset of $L$ by $S$. If $S$ has a top and bottom element, then $\sup S$ and $\inf S$ exist and are elements of $S$. Denote them $a$ and $A$ respectively. **Since $a,A \in S$, then $S$ is finite.** This means that all the elements in $S$ can be ordered from smallest to largest; thus we have $S= \{ a, ..., A \}$. Based upon this ordering, given any two elements $b,c \in S$, one may determine that $b \le c$ or $c \le b$. Since all the subsets of $L$ are chains, this implies that $L$ must also be a chain.
My Question: I am uncertain on whether the step in bold is valid. I believe it is based upon the transitivity condition in the definition for a partially ordered set, though I would appreciate some feedback on the matter.
AI: Given any two elements $x,y\in L,$ we know that $\{x,y\}$ has a top element and a bottom element. This shows that comparability holds on $L$, so since $L$ is a poset, then $L$ is a linearly ordered set.
Now, suppose that $L$ is not finite. Let $x_0$ be the top element of $L$, and for any nonnegative integer $n,$ let $x_{n+1}$ be the top element of $L\setminus\{x_0,...,x_n\}.$ Show that $\{x_n:n\text{ a nonnegative integer}\}$ is a subset of $L$ without a bottom element--the desired contradiction (you'll need the assumption that $L$ isn't finite).
P.S. (Added later): The actual claim should read "...every non-empty subset has...," for obvious reasons. In fact, we can adjust the claim as follows:
Let $L$ be a partially ordered set. Then $L$ is a finite chain if and only if every non-empty subset of $L$ has a top element and a bottom element.
The forward direction can be proved by induction, though this may vary in difficulty, depending on which definition of "finite" is being used.
H: Where is the fallacy in the argument using Prime Number Theorem
I am reading about the Prime Number Theorem from the book by Ingham. As an application of the PNT I found the following theorem:
Now my doubt is with the step $\frac{\log(y)}{\log(x)}\rightarrow 1$: can we say $\log(y)\rightarrow\log(x)$? If I apply the antilog I get $y\rightarrow x$, which is not true by the PNT.
So where is the fallacy in my argument?
I think it is because we are neglecting the $\log\log x$ term, but I am not sure, and I am unable to find a suitable argument.
Also, we are applying $\log$ to the PNT even though it is a kind of limit, so why can't we apply the antilog? Are there any conditions for applying a function in these kinds of cases?
AI: The arrow "$\to$" in $\frac{\log y}{\log x} \to 1$ is not the same as an equality sign ($=$). It means (see the "when $x \to \infty$" a couple of lines above) that $$\lim_{x \to \infty} \frac{\log y}{\log x} = \lim_{x \to \infty} \frac{\log \pi(x)}{\log x} = 1.$$
You cannot conclude from this that $\log y \to \log x$, simply because that statement isn't very meaningful. (E.g. if you use it to mean that $\displaystyle \lim_{x \to \infty} \log y = \log x$, then this is nonsensical as $x$ is a varying (increasing) variable on the left-hand-side of the equation, and what value does it take on the right-hand-side?)
As an example, consider the statement that $$\lim_{n \to \infty} \frac{n+1}{n} = 1.$$ This is a true statement, but you cannot use $\frac{n+1}{n} \to 1$ to conclude that $n+1 \to n$, nor can you then subtract $n$ from both sides to say that $1 \to 0$.
Edit: I see that there is some scope for confusion, because in the proof quoted in the question, they do seem to perform operations that look like they're taking logs on both sides of "$\to$" and treating it like an equality sign, etc. But this is not actually what they're doing, so let's rewrite the proof without the "$\to$" symbol to be clear. With $y = \pi(x)$, we have, by the Prime Number Theorem, $$\lim_{x \to \infty} \frac{y \log x}{x} = 1.$$
Now, we can "take logarithms" by using the fact that $\lim_{x \to \infty} \log g(x) = \log \lim_{x \to \infty} g(x)$ at good places, which is basically a consequence of the fact that $\log$ is a continuous function: see chain rule. So we get
$$\lim_{x \to \infty} \log \frac{y \log x}{x} = \log 1 = 0.$$
This is the same as
$$\lim_{x\to\infty} (\log y + \log \log x - \log x) = 0$$
Dividing throughout by $\log x$ (product rule), we get
$$\lim_{x\to\infty} (\frac{\log y}{\log x} + \frac{\log \log x}{\log x} - \frac{\log x}{\log x}) = 0$$
Now using the facts $\frac{\log x}{\log x} = 1$ and $\lim_{x \to \infty}\frac{\log \log x}{\log x} = 0$ and $\lim_{x \to \infty}(a(x) + b(x)) = \lim_{x \to \infty} a(x) + \lim_{x\to\infty} b(x)$ when all exist ("sum rule"), this becomes
$$\lim_{x\to\infty} \frac{\log y}{\log x} = 1$$
Finally, we have
$$\lim_{x\to\infty} \frac{y\log y}{x} = \lim_{x\to\infty} (\frac{y\log x}{x}\frac{\log y}{\log x}) = \lim_{x\to\infty} \frac{y\log x}{x} \lim_{x\to\infty} \frac{\log y}{\log x} = 1 \cdot 1 = 1$$
as both limits exist; this is the "product rule" for limits again.
Conclusion: By default, you cannot freely perform operations on limits and expect them to work out. There are certain things you can do, but this must be justified. (BTW, taking antilog of both sides is something you can do: e.g. from $\displaystyle \lim_{x\to\infty}\frac{\log y}{\log x} = 1$, you can conclude that $\displaystyle \lim_{x\to\infty} e^{\frac{\log y}{\log x}} = e$. It's just that the expression $\log y \to \log x$ is meaningless.)
So rather than asking why we can't do certain things, the right question is to ask why we can do the operations we do, and examine them closely.
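To watch the two convergence rates side by side, here is a small numerical sketch of my own (assuming sympy for the prime-counting function):

```python
from math import log
from sympy import primepi

for k in range(3, 8):
    x = 10**k
    y = int(primepi(x))  # y = pi(x)
    print(x, y*log(x)/x, log(y)/log(x))
# Both ratios tend to 1, but very slowly; the log(log x)/log x term
# explains why log(y)/log(x) is still well below 1 at x = 10^7.
```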
H: Probability Theory (Pinochle deck of cards)
-=Attempts added=-
A Pinochle deck is a special deck of cards with 48 cards in total. It consists of two copies of each of the 9, 10, J, Q, K and Ace of all four suits (so there are 2 nine of clubs, 2 nine of diamonds, 2 nine of hearts, 2 nine of spades, and so on for every other denomination). Poker can be played with the Pinochle deck, but the probabilities are different from poker with a standard deck. Calculate the following for the Pinochle deck:
a) Total number of 5-card Poker hands possible (order does not matter)
Attempt: 48C5 = 1712304 (I believe this number includes repetitions, how do I get rid of those?)
b) Probability of a four of a kind (4 cards of the same denomination plus one card of a different denomination)
For this one, I know how to do it with a standard deck, as following, but not the Pinochle one. How would the numbers be changed? I know the technique is the same, but which numbers would end up changing?
Attempt: P(4 of a kind):
Denominations: 13 C 1 x 12 C 1
Suits: 4C4 x 4C1
Total: 13C1 x 12C1 x 4C4 x 4C1 = 624
P(4 of a kind 52 cards) = 624/52C5 = 1/4165
So would the Pinochle probability be = 624/48C5 ?
c) Probability of a royal flush (A,K,J,Q, 10 all of the same suit)
Attempt: P(Royal Flush) = 8/(48 C 5) = 1/214038
AI: I am not familiar with Pinochle decks. So for the count I will assume that for example the two $\heartsuit$ Queens are identical.
Then there are $3$ types of $5$-card hands: (i) all cards distinct; (ii) there is $1$ pair of identicals; (iii) there are $2$ pairs of identicals. For the count, we calculate the number of each type, and add up.
(i) This is easiest. There are $24$ different cards, and there are $\binom{24}{5}$ ways to choose $5$.
(ii) There are $24$ different cards. We choose $1$ of the kinds to have $2$ of, and then $3$ different cards for the rest. That gives total $\binom{24}{1}\binom{23}{3}$.
(iii) This can be tricky. We pick $2$ of the kinds to have $2$ each of, and then an odd card. That gives $\binom{24}{2}\binom{22}{1}$.
Now for poker hand probabilities, we have to forget about the count we just made, for the different hands that we counted above are not equally likely. The Type (i) hands are all equally likely, as are the Type (ii) hands, as are the Type (iii) hands, but we do not have equal likelihood between different types.
There are two ways to proceed. We can take account of the different probabilities for each type. But my inclination is to imagine that the "repeats" are coloured red and blue, making the $48$ cards distinct. Then there are $\binom{48}{5}$ equally likely hands.
For each type of poker hand, count how many ways there are to produce it, taking colouring into account. I will leave the (unpleasant) details to you. We need to define carefully what we mean by each poker hand. For example, five of a kind is now possible.
Let's do a relatively straightforward one, $1$ pair. The type of card we have $2$ of can be chosen in $6$ ways, $9$ to Ace. Suppose for example it is $2$ Kings. They can be either $2$ of the same suit ($4$ choices) or of different suits. If so, the different suits can be chosen in $\binom{4}{2}$ ways, and then the actual cards in $2^2$ ways (all combinations of red and blue), for a total of $4+(6)(4)=28$ ways. Now we choose $3$ denominations from the remaining $5$, and for each choose one of its $8$ cards (a suit, and red or blue). This gives a total of $(6)(28)\binom{5}{3}8^3$. Check it. Now divide by $\binom{48}{5}$ for the probability.
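With the colouring in place, the one-pair count can be brute-forced to check the formula (a sketch of my own; iterating over all $\binom{48}{5}=1712304$ hands takes a little while):

```python
from itertools import combinations, product
from collections import Counter

ranks = ['9', '10', 'J', 'Q', 'K', 'A']
deck = list(product(ranks, 'CDHS', 'RB'))  # (rank, suit, colour): 48 distinct cards

one_pair = 0
for hand in combinations(deck, 5):
    counts = sorted(Counter(card[0] for card in hand).values())
    if counts == [1, 1, 1, 2]:  # exactly one rank repeated, as a pair
        one_pair += 1
print(one_pair, 6 * 28 * 10 * 8**3)  # both should print 860160
```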
H: Lebesgue vs. Riemann integrable function
While trying to learn the difference between Lebesgue and Riemann integrals, I came across the following example:
$$\int_{0}^{1}t^\lambda\,\mathrm dt$$
What I know so far:
only for $\lambda\ge0$ does the integral exist as a (proper) Riemann integral,
only for $\lambda>-1$ does it exist as an improper Riemann integral.
My question is: for which $\lambda$'s does it exist as a Lebesgue integral?
(I suspect it's the same as with the improper case since $|t^\lambda|=t^\lambda$, or is there more to it?)
AI: For $\lambda > -1$, use Monotone Convergence Theorem on $t^\lambda \chi_{[\frac{1}{n},1]}$, where $\chi$ is the characteristic function.
For $\lambda \leq -1 $, assume that the integral is finite. Then it is finite for each $t^\lambda \chi_{[\frac{1}{n},1]}$. But this integral gets arbitrarily large as $n$ goes to infinity which is a contradiction.
H: Evaluating $\lim\limits_{n\to\infty}(a_1^n+\dots+a_k^n)^{1\over n}$ where $a_1 \ge \cdots\ge a_k \ge 0$
Need to find $\lim\limits_{n\to\infty}(a_1^n+\dots+a_k^n)^{1\over n}$ Where $a_1\ge\dots\ge a_k\ge 0$
I thought about Cauchy's theorem on limits of arithmetic means, $\lim\limits_{n\to\infty}\dfrac{a_1+\dots+a_n}{n}=\lim a_n$, and about what happens if all $a_i=0$ or $a_1=\dots=a_k$, but maybe I am thinking about it wrong?
Maybe it is too simple but I am not getting it; please help.
AI: Note that
$$a_1=[a_1^n]^{1/n}\leq [a_1^n+\cdots+a_k^n]^{1/n}\leq [ka_1^n]^{1/n}=k^{1/n}a_1$$
and apply the squeeze theorem: since $k^{1/n}\to 1$, both bounds converge to $a_1$, so the limit is $a_1$, the largest of the $a_i$.
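Numerically the squeeze is easy to watch (a small sketch of my own):

```python
a = [3.0, 2.0, 2.0, 0.5]  # a_1 >= ... >= a_k >= 0
for n in (1, 5, 25, 125):
    print(n, sum(x**n for x in a) ** (1/n))
# The values decrease toward a_1 = 3, the largest entry.
```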
H: Affine transformation, if $L_1, L_2 - $ skew lines, $f(L_1), \ f(L_2) $ are parallel, then $f$ is not injective
Could you tell me how to prove that if $f$ is affine transformation, $L_1, L_2 $ are skew lines, $f(L_1), \ f(L_2) $ are parallel, then $f$ is not injective?
AI: Suppose $L_i = a_i + \mathbb Rv_i$ are the two lines and $f$ is given by $x \mapsto b + Ax$ with linear $A$. Then we have
$$ f(L_i) = b + Aa_i + \mathbb RAv_i, \quad i = 1,2 $$
Now, $L_1$ and $L_2$ being skew means $v_1$ and $v_2$ are linearly independent; $f(L_1) \parallel f(L_2)$ gives that $Av_1$ and $Av_2$ are linearly dependent, say $\lambda_1Av_1 + \lambda_2 Av_2 = 0$ for some $(\lambda_1, \lambda_2) \ne (0,0)$. As $\lambda_1v_1 + \lambda_2v_2 \ne 0$, $A$ is not injective, hence $f$ is not.
H: Is a continuous map between smoothable manifolds always smoothable?
Let $X$ and $Y$ be topological manifolds and $f:X\to Y$ a continuous map.
Suppose $X$ and $Y$ admit a differentiable structure (at least one).
My question: is it always possible to choose a differentiable structure on $X$ and one on $Y$ in such a way that $f$ turns out to be differentiable?
I know the answer is yes in some cases, e.g. when $f:X\to Y$ is a covering projection. I suspect this is not true in general, but can't find a counterexample.
Thanks in advance.
AI: (Note that this answer is more general than can be obtained from any version of Sard's theorem that I know of, since it also rules out "differentiable with discontinuous derivative".)
$\mathbb{R}$ and $\mathbb{R}^2$ are obviously smoothable.
Let $\hspace{.01 in}\: f : \mathbb{R} \to \mathbb{R}^2 \:$ be a continuous extension of a space-filling curve.
Let $\: g : \mathbb{R} \to \mathbb{R} \:$ and $\: h : \mathbb{R}^2 \to \mathbb{R}^2 \:$ be charts for some smooth structure on $\operatorname{Dom}(\:f\hspace{.01 in})$
and $\operatorname{Codom}(\:f\hspace{.01 in})$, respectively. $\;\;$ Let $\: \pi : \mathbb{R}^2 \to \mathbb{R} \:$ be projection to the $x$-coordinate.
$h\circ f\circ g^{-1} \circ \pi \:$ is a function from $\mathbb{R}^2$ to itself, and the forward image of the $x$-axis under that function is the unit square. $\;$ Since the $x$-axis has Lebesgue measure zero in $\mathbb{R}^2$ and the unit square does not,
it follows from the answer to my question here that $\: h\circ f\circ g^{-1} \circ \pi \:$ is not differentiable.
Since projection is obviously differentiable, this means $\: h\circ f\circ g^{-1} \:$ is not differentiable,
so $\hspace{.02 in}f$ is not differentiable with respect to the smooth structures on $\operatorname{Dom}(\:f\hspace{.01 in})$ and $\operatorname{Codom}(\:f\hspace{.01 in})$.
Therefore, since that argument works for any charts $g$ and $h$, $\hspace{.05 in}f$ cannot be made differentiable. |
H: Eigenvectors orthogonal to $j$
I'm studying the proof of the following statement:
$Spec(K_n) = (n-1)^1(-1)^{n-1}$
At some point I have:
By the Spectral Theorem, when looking for eigenvectors $v$ we can assume they are orthogonal to $j$.
$j$ being the all-1 vector.
I don't understand which part of the Spectral Theorem implies this?
AI: The spectral theorem tells us that a symmetric matrix (e.g. the adjacency matrix $A$ of the $K_n$) has an orthogonal basis of eigenvectors. As $j$ is an eigenvector for the eigenvalue $n-1$ (since each row of $A$ has row sum $n-1$), when looking for eigenvectors for $-1$ we must concentrate on $\{j\}^\perp$.
H: Limit of double sum: $\lim\limits_{n\to\infty}n^{-2}\sum\limits_{k=1}^n\sum\limits_{m=k+1}^n\left(\frac{n-2k}{n+2k}\right)^2\frac{n-2m}{n+2m}$
Who is so kind to enlighten me about the steps I need to follow?
$$\lim_{n\to\infty}\frac{1}{n^2}\sum_{k=1}^n\sum_{m=k+1}^n\left(\frac{n-2k}{n+2k}\right)^2\frac{n-2m}{n+2m}$$
AI: We have a Riemann sum:
$$\lim_{n \to \infty} \frac{1}{n^2} \sum_{k=1}^n \left ( \frac{1-\frac{2k}{n}}{1+\frac{2k}{n}}\right)^2 \sum_{m=k+1}^n \frac{1-\frac{2m}{n}}{1+\frac{2m}{n}} = \int_0^1 dx \left (\frac{1-2 x}{1+2 x}\right )^2 \, \int_x^1 dy \frac{1-2 y}{1+2 y}$$
The evaluation of the above integral is straightforward but messy. The inner integral has an antiderivative
$$\begin{align}\int dy \frac{1-2 y}{1+2 y} &= \int \frac{dy}{1+2 y} - \int dy \frac{2 y}{1+2 y}\\ &= \frac12 \log{(1+2 y)} - \left [y - \frac12 \log{(1+2 y)} \right ]\\ &= \log{(1+2 y)}-y\end{align}$$
The integral is now a single integral when the inner integral is evaluated over its integration limits:
$$\int_0^1 dx \left (\frac{1-2 x}{1+2 x}\right )^2 \left [\log{3} - 1 + x - \log{(1+2 x)} \right ]$$
This integral may be evaluated by substituting $u=1+2 x$, $x=(u-1)/2$, to get
$$\frac12 \int_1^3 du \left (\frac{4}{u^2} - \frac{4}{u}+1\right ) \left [\log{3}-1+\frac{u-1}{2} - \log{u}\right]$$
Now,
$$\begin{align}\frac12 \int_1^3 du \left (\frac{4}{u^2} - \frac{4}{u}+1\right )(\log{3}-1) &= \left(\frac{7}{3}-2 \log{3}\right) ( \log{3}-1) \\ &= -\frac{7}{3} + \frac{13}{3} \log{3} - 2\log^2{3}\end{align}$$
$$\begin{align}\frac12 \int_1^3 du \left (\frac{4}{u^2} - \frac{4}{u}+1\right ) \log{u} &= 2 \int_1^3 du \frac{\log{u}}{u^2} - 2 \int_1^3 du \frac{\log{u}}{u} + \frac12 \int_1^3 du \, \log{u} \\ &= 2 \left [ - \frac{\log{u}}{u}\right]_1^3 + 2 \int_1^3 \frac{du}{u^2} - [\log^2{u}]_1^3 + \frac12 [u \log{u}-u]_1^3 \\ &= \frac13 + \frac{5}{6} \log{3} - \log^2{3}\end{align}$$
$$\begin{align}\frac12 \int_1^3 du \left (\frac{4}{u^2} - \frac{4}{u}+1\right ) \frac{u-1}{2} &= \frac14 \int_1^3 du \left (-\frac{4}{u^2} + \frac{8}{u} - 5+u \right )\\ &= 2 \log{3}-\frac{13}{6}\end{align}$$
Combining the three results above (adding the first and third and subtracting the second, according to the signs in the bracket), I get for the desired limit:
$$\lim_{n \to \infty} \frac{1}{n^2} \sum_{k=1}^n \left (\frac{n-2 k}{n+2 k} \right )^2 \, \sum_{m=k+1}^n \frac{n-2 m}{n+2 m} = -\log^2{3} + \frac{33}{6} \log{3} - \frac{29}{6}$$
which is about $0.002085$. |
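(Added for verification, not part of the original argument.) A minimal Python sketch that compares the finite double sum against the closed form; the naive double loop is $O(n^2)$, and for moderately large $n$ the two values should agree to several decimal places:

    import math

    def double_sum(n):
        total = 0.0
        for k in range(1, n + 1):
            outer = ((n - 2.0 * k) / (n + 2.0 * k)) ** 2
            inner = sum((n - 2.0 * m) / (n + 2.0 * m) for m in range(k + 1, n + 1))
            total += outer * inner
        return total / n ** 2

    closed_form = -math.log(3) ** 2 + (33 / 6) * math.log(3) - 29 / 6
    # Both values should be close to 0.002085 (the Riemann sum converges slowly).
    print(double_sum(2000), closed_form)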
H: Combinatorics question with infinite sets
I have made a claim that I now am trying to prove. The claim is:
if $f: S \times S \to P(S)$ is an injective function mapping pairs to subsets, then there exists a pair $(a,b)$ with the property that $f(a,b) = s$ and $c \in s$ with $c \neq a$ and $c \neq b$.
I'm quite sure this claim is true. My idea is to prove it using the pigeonhole principle. Count the domain, $D = |S|^2$, and the number of sets made up only of elements of the pair, $R = {|S| \choose 1} + {|S| \choose 2} = |S|+{|S| \choose 2}$. Then for $|S|>1$ it is true that $D > R$.
But this proof only works for finite $S$, because for infinite $S$ we have $|S|^2 = |S|$, so $D > R$ no longer holds. How can I prove the case where $S$ is infinite?
AI: If $S$ is infinite it simply need not be true. For example, let $S=\Bbb Z^+$, and let
$$f\big(\langle m,n\rangle\big)=\{m,2^m3^n\}$$
for all $\langle m,n\rangle\in\Bbb Z^+\times\Bbb Z^+$. The map $\langle m,n\rangle\mapsto 2^m3^n$ is injective, so $f$ is injective, but $m\in f\big(\langle m,n\rangle\big)$ for all $\langle m,n\rangle\in\Bbb Z^+\times\Bbb Z^+$.
Added: It occurs to me that I may have misunderstood: I took the requirement to be that there is a pair $\langle a,b\rangle$ such that no $c\in f\big(\langle a,b\rangle\big)$ is either $a$ or $b$. If, however, you simply meant that there is some pair $\langle a,b\rangle$ such that $f\big(\langle a,b\rangle\big)$ contains at least one element other than $a$ and $b$, the conjecture is true for infinite as well as for finite sets.
To see this, suppose that $f\big(\langle a,b\rangle\big)\subseteq\{a,b\}$ for each $\langle a,b\rangle\in S\times S$. Fix distinct $a,b,c,d\in S$. The map $f$ must take the four ordered pairs $\langle a,a\rangle,\langle a,b\rangle,\langle b,a\rangle$, and $\langle b,b\rangle$ to subsets of $\{a,b\}$, and there are exactly four such subsets, $\varnothing,\{a\},\{b\}$, and $\{a,b\}$. Similarly, $f$ must take the four ordered pairs $\langle c,c\rangle,\langle c,d\rangle,\langle d,c\rangle$, and $\langle d,d\rangle$ to subsets of $\{c,d\}$, and there are exactly four such subsets, $\varnothing,\{c\},\{d\}$, and $\{c,d\}$. But that means that eight different ordered pairs must be sent to seven different sets, since $\varnothing$ is a subset of both $\{a,b\}$ and $\{c,d\}$. Thus, $f$ cannot be injective. |
H: Representation of $j$ in an orthonormal basis
I'm studying a proof and I have $j$ (the all-1 vector) represented in the basis of orthonormal vectors $\{v_1, ..., v_n\}$ such that
$$j = \sum_ic_i v_i$$
I don't understand why I then have:
$$j^Tv_i = c_i$$
and
$$\sum_ic_i^2 = n$$
AI: Note that $$j^Tv_i=\left(\sum_k c_kv_k^T\right)v_i=c_i v_i^Tv_i$$ because $v_k^Tv_i=0$ if $k\ne i$. Hence we have $j^T v_i=c_i$. For the second equality note that $j^Tj=n$ (as $j$ has $n$ entries, each equal to $1$), so $$n=j^Tj=\left(\sum_k c_kv_k^T\right)\left(\sum_k c_kv_k\right)$$ Now expand the product to get $\sum_i c_i^2$ (use the fact that $v_k^Tv_i=0$ if $k\ne i$ and $1$ otherwise). |
H: How can I evaluate $\int_0^\infty \frac{\sin x}{x} \,dx$? [may be duplicated]
How can I evaluate $\displaystyle\int_0^\infty \frac{\sin x}{x} \, dx$? (Let $\displaystyle \frac{\sin0}{0}=1$.)
I proved that this integral exists by Cauchy's sequence.
However I can't evaluate what is the exact value of this integral.
AI: It's the famous Dirichlet integral; its value is $\frac{\pi}{2}$.
http://en.wikipedia.org/wiki/Dirichlet_integral |
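If SymPy is available, the value can also be confirmed symbolically; a quick sketch (my addition, not part of the original answer):

    from sympy import symbols, sin, integrate, oo, pi

    x = symbols('x')
    value = integrate(sin(x) / x, (x, 0, oo))
    print(value, value == pi / 2)  # pi/2 True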
H: variance inequality
Show that, for any discrete random variable $X$ that takes on values in
the range $[0,1]$, $Var[X] \le 1/4$.
I translated it into an inequality like this:
Given $x_1, x_2, x_3, \cdots ,x_n$ with $0 \le x_i \le 1$, and $p_1, p_2, p_3, \cdots ,p_n$ with $p_i \ge 0$ and $p_1+ p_2+ p_3 \cdots +p_n = 1$, prove that $\sum _1^nx_i^2p_i - (\sum _1^n x_ip_i)^2 \le {1\over 4}$. How can I prove it?
At first I tried the Cauchy–Schwarz inequality, but I failed. :(
AI: Here is a way:
Note that $$0 \leq X \leq 1$$
$$\Rightarrow 0\leq X^2 \leq X \leq 1$$
Thus $E[X^2] \leq E[X]$
Since $Var[X]=E[X^2]-E[X]^2$, it follows that $$Var[X] \leq E[X](1-E[X]) $$
But $0 \leq E[X] \leq 1$, and the maximum of $f(x)=x(1-x)$ for $x\in [0,1]$ is $1/4$, attained at $x=0.5$ (hint: AM-GM).
QED |
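As an illustration that the bound $1/4$ is attained (my addition, not part of the proof): take $X$ to be a fair coin on $\{0,1\}$, so $E[X]=1/2$ and $f(E[X])=1/4$. A tiny Python check:

    # Bernoulli(1/2) on {0, 1}: E[X] = 1/2, E[X^2] = 1/2, hence Var[X] = 1/4.
    xs = [0.0, 1.0]
    ps = [0.5, 0.5]
    mean = sum(x * p for x, p in zip(xs, ps))
    var = sum(x * x * p for x, p in zip(xs, ps)) - mean ** 2
    print(var)  # 0.25, matching the bound 1/4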
H: Does $X$ have countable network if it has countable extent?
Let $X$ have a $\sigma$-discrete network and have countable extent.
Does $X$ have countable network?
A family $\mathcal N$ of subsets of a topological space $X$ is a network for $X$ if for every point $x\in X$ and any neighbourhood $U$ of $x$ there exists an $M \in \mathcal N$ such that $x\in M \subset U$.
A network is like a base, except that its members need not be open sets.
Another question:
How could we see that every Moore space has a $\sigma$-discrete network?
Thanks for your help.
AI: Let $\mathscr{N}=\bigcup_{n\in\omega}\mathscr{N}_n$ be a $\sigma$-discrete network for $X$, where each $\mathscr{N}_n$ is discrete. Fix $n\in\omega$, and for each $N\in\mathscr{N}_n$ fix $x_N\in N$. Then $\{x_N:N\in\mathscr{N}_n\}$ is a closed, discrete subset of $X$, so it’s countable (since $X$ has countable extent), and therefore $\mathscr{N}_n$ is countable. It follows immediately that $\mathscr{N}$ is countable.
I’ll have to come back to the other question later. |
H: Largest eigenvalue of a graph
I have $\lambda_1$ the largest eigenvalue of a graph, with $x = (x_v)_{v \in V(G)}$ the corresponding eigenvector.
$x_u$ is the entry of $x$ with maximum absolute value.
I don't understand why I then have:
$$\lambda_1 x_u = \sum_{v \in N(u)} x_v$$
$N(u)$ being the neighborhood of $u$.
AI: Let $A$ be the adjacency matrix of $G$, that is $a_{uv} = 1 \iff v \in N(u)$ and $a_{uv} = 0$ otherwise. For any eigenvalue $\lambda$ and corresponding eigenvector $x$ we have $Ax = \lambda x$, that is for any $u \in V(G)$:
\begin{align*}
\lambda x_u &= (Ax)_u\\
&= \sum_{v\in V(G)} a_{uv} x_v\\
&= \sum_{v\in N(u)} x_v.
\end{align*} |
H: Easiest example of a rearrangement of an infinite series leading to a different sum
I am reading the section on the rearrangement of infinite series in Ok, E. A. (2007). Real Analysis with Economic Applications. Princeton University Press.
As an example, the author exhibits a rearrangement of the sequence
\begin{align} \frac{(-1)^{m+1}}{m} \end{align}
and shows that the infinite sums of these two sequences must be different.
My question is: what is, to you, the easiest and most intuitive example of such an infinite series having different values for different arrangements of the terms? Ideally, I hope to find something as intuitive as the illustration that some infinite series do not have limits via $\sum_{i=1}^\infty (-1)^i$.
I found another example in http://www.math.ku.edu/~lerner/m500f09/Rearrangements.pdf but it is still too abstract to feed my intuition...
AI: $$1/2-1/3+1/4-1/5+1/6-1/7+\cdots=(1/2-1/3)+(1/4-1/5)+(1/6-1/7)+\cdots$$ is obviously positive. The rearrangement $$1/2-1/3-1/5+1/4-1/7-1/9+1/6-1/11-1/13+\cdots$$ is clearly negative; just group it as $$(1/2-1/3-1/5)+(1/4-1/7-1/9)+(1/6-1/11-1/13)+\cdots$$ which is $$(1/2-8/15)+(1/4-16/63)+(1/6-24/143)+\cdots\lt(1/2-8/16)+(1/4-16/64)+(1/6-24/144)+\cdots=0$$ |
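A numerical illustration of the two arrangements above (my own sketch, using the answer's grouping pattern $1/(2k)-1/(4k-1)-1/(4k+1)$): the partial sums of the original series stay positive, while those of the rearrangement stay negative.

    # Original arrangement: 1/2 - 1/3 + 1/4 - 1/5 + ...  (partial sum, positive)
    original = sum((-1) ** m / m for m in range(2, 2001))

    # Rearrangement: one positive term 1/(2k) followed by two negative terms.
    rearranged = sum(1 / (2 * k) - 1 / (4 * k - 1) - 1 / (4 * k + 1)
                     for k in range(1, 1001))

    print(original, rearranged)  # roughly 0.307, and a negative number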
H: Pole set of rational function on $V(WZ-XY)$
Let $V = V(WZ - XY)\subset \mathbb{A}(k)^4$ (k is algebraically closed). This is an irreducible algebraic set, so the coordinate ring is an integral domain, which allows us to form a field of fractions, $k(V).$ Let $\overline{W}, \overline{X}$ denote the images of $W$ and $X$ in the coordinate ring. Let $f=\dfrac{\overline{W}}{\overline{X}}\in k(V).$ I want to find the pole set of $f.$ I have a feeling I am overcomplicating this. Essentially I should just view $f$ as the function $W/X$ restricted to $V.$ I know that points on $V$ where $X=0$ and $W\neq 0$ are in the pole set, and at such points necessarily $Z=0.$ But what happens if both $W$ and $X$ are $0$?
AI: Indeed, the pole set of $f=\frac{\overline{W}}{\overline{X}}$ is precisely the set
$$
S(f)=\{(w,0,y,0) \in \Bbb{A}^4 \; \vert \; w,y \in k\}=V(X,Z)
$$
It is clear that $f$ is defined at every point not in $S(f)$, because
$$
f=\frac{\overline{W}}{\overline{X}}=\frac{\overline{Y}}{\overline{Z}}
$$
and from this you see that $f$ is defined at $(w,x,y,z)$ if either $x \neq 0$ or $z \neq 0$.
In order to see that $f$ is not defined anywhere else, let $P=(a,0,b,0) \in \Bbb{A}^4$, and assume you could write
$$
\frac{\overline{W}}{\overline{X}}=\frac{\overline{F}}{\overline{G}}
$$
with $F,G \in k[X,Y,Z,W]$ and $G(P) \neq 0$.
This implies that $G \cdot W-F \cdot X \in (WZ-XY)$, which means that $G \cdot W-F \cdot X=H \cdot (WZ-XY)$ for some $H \in k[X,Y,Z,W]$. Now you get
$$
X \cdot (F-HY)=W \cdot (G-HZ)
$$
from which it follows that $X$ must divide $G-HZ$, i.e. $G-HZ \in (X)$. But then evaluating at $P=(a,0,b,0)$ yields
$$
G(P)=0
$$
which contradicts our choice of $G$. |
H: Which accompanying text do you suggest on these topics of finite fields?
Please take a look at pages 80-85 (Section 2.6 Finite fields) of this handbook, Handbook of Applied Cryptography.
I am trying to learn the mathematics enumerated in these pages. I do not need the algorithms. I will also skip any proofs of theorems. This is background reading in preparation for the areas of cryptography that use them.
While the handbook is an excellent and authoritative one, it is somewhat concise for my purpose. It states the theorems and facts without explaining the intuition behind them. Also, it does not contain many examples. This is quite natural; it is a handbook, not a textbook. I am looking for one or two texts or lecture notes dealing more or less with these areas, but with elaborations and lots of examples to facilitate self-learning.
Page 86 of the same handbook suggests four references ([646], [764], [830], [841]) for further reading. I have taken a look at all of these except the first one. But these are too elaborate for the time I can spend on these topics.
I have done a thorough search over the Internet, without much success. Perhaps a lack of knowledge on my part is also an impediment.
Your suggestions will be appreciated.
AI: Virtually any book on algebra will contain more, but since you are especially interested in coding theory, I can recommend this coding theory book which is a little less terse (see pages 111-115).
Here's another link to a document with worked-out examples. It looks like it is written for engineers (but I'm not sure if that's a plus or minus for you).
Wolfram has a short article with a worked example. In this case, Wikipedia's article might be fairly helpful, because it appears to contain a lot of examples! Be on your guard for typos, of course.
Searching "finite field pdf" turned up lots of relevant hits, including this slideshow on the topic.
The main theory of finite fields is very simple. A finite field has prime characteristic $p$ for some $p$, and then it must be of order $p^k$ for some positive integer $k$. There are finite fields of all possible orders $p^k$ for each combination of prime $p$ and positive integer $k$.
Finally, the group of nonzero elements in the field must be cyclic, and so there are things called primitive elements which generate that group.
Constructing the fields of prime order is easy: they are exactly $\Bbb F_p=\Bbb Z/p\Bbb Z$. The link I provided above takes care of the last task by giving some worked-out examples of how the rest of the fields can be constructed with a quotient $\Bbb F_{p^k}=\Bbb F_p[x]/(g(x))$ where $g(x)$ is an irreducible polynomial over $\Bbb F_p$ of degree $k$.
One more thing that comes to mind for coding theory is the "Freshman's Dream": for any polynomial $g(x)$ with coefficients in $\Bbb F_p$, $g(x^p)=(g(x))^p$, which sometimes comes in handy. For example, $(x+2)^3=x^3+2^3=x^3+2$ over $\Bbb F_3$, and $x^3+2$ agrees with $x+2$ as a function on $\Bbb F_3$ (since $a^3=a$ for every $a\in\Bbb F_3$).
There are probably some finer details about fields that I'm not mentioning, but the above facts were sufficient for a few years of graduate study on coding theory in my case, anyhow. |
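To make the quotient construction concrete, here is a small Python sketch (my own illustration, not from any of the cited references) of arithmetic in $\Bbb F_9 = \Bbb F_3[x]/(x^2+1)$; the polynomial $x^2+1$ is irreducible over $\Bbb F_3$ because $-1$ is not a square mod $3$, and an element $a+bx$ is stored as the pair (a, b):

    p = 3  # characteristic

    def add(u, v):
        return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

    def mul(u, v):
        # (a + bx)(c + dx) = ac + (ad + bc)x + bd*x^2, and x^2 = -1 in the quotient
        a, b = u
        c, d = v
        return ((a * c - b * d) % p, (a * d + b * c) % p)

    # The element x + 2, stored as (2, 1), is primitive: its powers
    # run through all 8 nonzero elements of F_9 and return to 1.
    e = (2, 1)
    acc, powers = (1, 0), []
    for _ in range(8):
        acc = mul(acc, e)
        powers.append(acc)
    print(len(set(powers)) == 8 and powers[-1] == (1, 0))  # True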
H: Set Unions with repeating elements
the union of {1,2,3,4} and {2,3,4,5} is {1,2,3,4,5}
but the union of {2,2,2,3} and {2,3,3,5} is {2,3,5}
Is there another union concept which takes account of the number of occurrences of repeating elements? I.e.
the SOME_TERM of {2,2,2,3} and {2,3,3,5} is {2,2,2,3,3,5}; basically, unlike the ordinary union, which only accounts for one copy of each term repeated in both sets, this operation would keep track of how many times each element occurs.
AI: There isn't such a term in common use, to the best of my knowledge, but the concept is well known. (For example, the least common multiple of $2\times 2\times 2\times 3$ and $2\times 3\times 3\times 5$ is exactly $2\times 2\times 2\times 3\times 3\times 5$!)
The reason such a term isn't in common use is that sets don't take account of multiplicity anyway: that is, {2, 2, 3} = {2, 3} = {2, 2, 2, 1+1, 3, 3, 3, 2, 3}. Sets only care about what elements they contain. |
H: Choosing disjoint representatives from two sets of squares
There are 3 red axis-aligned interior-disjoint squares.
There are 3 blue axis-aligned interior-disjoint squares.
Is it always possible to find a pair of 1 red square and 1 blue square, such that they are interior-disjoint?
I tried many combinations, and it seems that it's always possible, but could not find a proof.
AI: I will prove your proposition by a reductio ad absurdum. It turned out to be a rather lengthy argument, mainly due to problems of language and notation, but here it goes:
Let your red squares be given by $$R^i = \left[R_l^i,R_r^i\right]\times \left[R_d^i,R_u^i\right]\subseteq\mathbb{R}^2$$ and your blue squares by $$B^i=\left[B_l^i,B_r^i\right]\times \left[B_d^i,B_u^i\right]\subseteq\mathbb{R}^2$$ (subscripts $l,r,d,u$ for left, right, down, up, so each interval is written with its smaller endpoint first),
and assume that the red squares as well as the blue squares are pairwise interior disjoint. Also assume that no blue square is interior disjoint with any red square.
If $R^1$ and $R^2$ are to be interior disjoint then at least one of the following must be true:
$R_r^1 \leq R_l^2$
$R_l^1 \geq R_r^2$
$R_u^1 \leq R_d^2$
$R_d^1 \geq R_u^2$
By rotating our plane if needed we may assume without loss of generality that $R_r^1 \leq R_l^2$.
In order for $B^i$ not to be interior disjoint with $R^1$, we now have $B_l^i\leq R^1_r$. In order for $B^i$ not to be interior disjoint with $R^2$ we have $B_r^i\geq R^2_l$. Thus $[R^1_r,R^2_l]\subseteq [B^i_l,B^i_r]$ for $i=1,2,3$. This means that the blue squares must lie above each other (i.e. $B^1_u\leq B^2_d$, $B^2_u \leq B^3_d$ for the appropriate numbering).
The same argument now shows that the red squares must all lie beside each other (i.e. $R^1_r\leq R^2_l$, $R^2_r \leq R^3_l$ for the appropriate numbering).
Now the distance between the left-most and the right-most red squares is at least the side-length of the middle red square. Hence the side-length of any blue square must exceed the side-length of this particular red square if it is to overlap with the outermost red squares. By the same argument, the side-length of any red square must exceed the side-length of the middle blue square. Thus by transitivity, the side-length of the middle blue square must exceed itself, which is an absurdity.
Hence we conclude that our assumption, that no blue square is interior disjoint from any red square, must be impossible. |
H: Proof of the 2 pointer method for finding a linked list loop
The linked list with a loop problem is classical - "how do you detect that a linked list has a loop" ? The "creative" solution to this is to use 2 pointers, one moving at a speed of 1 and the second one at the speed of 2. If the two pointers meet then there is a loop.
How do you prove this mathematically and more importantly how do you generalize? For example, will having the first pointer at a speed of 2 and the second at the speed of 3 still work?
For those without a background in CS, a linked list is a collection of nodes; each node has a link to the next one, so you can only go forward.
AI: If there is a loop (of $n$ nodes), then once a pointer has entered the loop it will remain there forever; so we can move forward in time until both pointers are in the loop. From here on the pointers can be represented by integers modulo $n$ with initial values $a$ and $b$. The condition for them to meet after $t$ steps is then
$a + t \equiv b + 2t \text{ mod }n$
which has solution $t = a - b \text{ mod }n$.
This will work so long as the difference between the speeds shares no prime factors with $n$. In particular, the speeds $2$ and $3$ from the question differ by $1$, which is coprime to every $n$, so that pair also always works. |
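For those who'd like to see the argument next to running code, here is a minimal Python sketch of the two-pointer method (the class and function names are my own, not from the question):

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def has_loop(head):
        slow = fast = head
        while fast is not None and fast.next is not None:
            slow = slow.next           # speed 1
            fast = fast.next.next      # speed 2; the speed difference is 1
            if slow is fast:           # the pointers must meet inside any loop
                return True
        return False                   # fast fell off the end: no loop

    # Build 1 -> 2 -> 3 -> 4 and close a loop back to node 2.
    n4 = Node(4); n3 = Node(3, n4); n2 = Node(2, n3); n1 = Node(1, n2)
    n4.next = n2
    print(has_loop(n1))  # True
    n4.next = None
    print(has_loop(n1))  # False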
H: A question of the property of positive definite matrix
Let $A$ be a positive definite $n$ by $n$ matrix.
Let $c_1$ be the smallest eigenvalue of $A$ and $c_2$ be the largest eigenvalue of $A$.
Then how can I show that
$$ | \langle z, Aw \rangle |^2 \le \langle z, Az \rangle \langle w, Aw \rangle$$
and
$$ c_1 |z|^2 \leq \langle z, Az \rangle \leq c_2 |z|^2$$
for any $z, w \in \mathbb C^n$?
Here $\langle x, y \rangle = \sum_{j=1}^n x_j \bar y_j$.
AI: Since $A$ is positive definite (hence Hermitian), there exists a unitary matrix $P$ such that $P^*AP=D=\operatorname{diag}(\lambda_1,\lambda_2,\dots,\lambda_n)$, where the $\lambda_i$ are the eigenvalues of $A$ and $\lambda_i>0$.
Let $y=P^*z=P^{-1}z=\begin{pmatrix}
y_1&y_2&y_3&\dots &y_n\\
\end{pmatrix}^t$ and $x=P^*w=P^{-1}w=\begin{pmatrix}
x_1&x_2&x_3&\dots &x_n\\
\end{pmatrix}^t$ (as $PP^*=I$ implies $P^{-1}=P^*$).
Then we have,
$\langle z,Aw\rangle=z^*Aw=z^*P(P^*AP)P^*w=(P^*z)^*D(P^*w)=y^*Dx$
It is easy to check that $y^*Dx=\sum_{i=1}^{n}\bar{y_i}\lambda_ix_i$
Now applying the Cauchy–Schwarz inequality to this sum, writing $\bar{y_i}\lambda_i x_i=(\sqrt{\lambda_i}\,\bar{y_i})(\sqrt{\lambda_i}\,x_i)$, we have
$\displaystyle \Big|\sum_{i=1}^{n}\bar{y_i}\lambda_ix_i\Big|^2\le \Big(\sum_{i=1}^{n}\lambda_i|y_i|^2\Big)\Big(\sum_{i=1}^{n}\lambda_i|x_i|^2\Big)=\langle y,Dy\rangle\langle x,Dx\rangle=\langle z,Az\rangle\langle w,Aw\rangle$ (as $PP^*=I$)
And the second one:
$\langle w,Aw\rangle=\sum_{i=1}^{n}\bar{x_i}\lambda_ix_i=\sum_{i=1}^{n}\lambda_i|x_i|^2\le c_2\sum_{i=1}^{n}|x_i|^2=c_2\langle x,x\rangle=c_2\langle P^*w,P^*w\rangle=c_2\langle w,w\rangle=c_2\sum_{i=1}^{n}|w_i|^2$ (as $PP^*=I$)
Similarly we also have
$\langle w,Aw\rangle=\sum_{i=1}^{n}\bar{x_i}\lambda_ix_i=\sum_{i=1}^{n}\lambda_i|x_i|^2\ge c_1\sum_{i=1}^{n}|x_i|^2=c_1\langle x,x\rangle=c_1\langle P^*w,P^*w\rangle=c_1\langle w,w\rangle=c_1\sum_{i=1}^{n}|w_i|^2$ (as $PP^*=I$) |
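A numerical sanity check of both inequalities (my own NumPy sketch, not part of the proof), using the inner product $\langle x, y \rangle = \sum_j x_j \bar y_j$ from the question:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = M.conj().T @ M + np.eye(n)   # Hermitian positive definite
    z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    w = rng.standard_normal(n) + 1j * rng.standard_normal(n)

    def ip(x, y):                    # <x, y> = sum_j x_j * conj(y_j)
        return np.sum(x * np.conj(y))

    lhs = abs(ip(z, A @ w)) ** 2
    rhs = (ip(z, A @ z) * ip(w, A @ w)).real
    c1, c2 = np.linalg.eigvalsh(A)[[0, -1]]   # smallest and largest eigenvalue
    print(lhs <= rhs)                                                     # True
    print(c1 * ip(z, z).real <= ip(z, A @ z).real <= c2 * ip(z, z).real)  # True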
H: Eigenvalues of power of matrices
How come if $\lambda$ is an eigenvalue of $A$, then $\lambda^k$ is an eigenvalue of $A^k$?
And is its multiplicity necessarily the same?
AI: Let $v$ be an eigenvector corresponding to $\lambda$. Then, by induction
$$ A^k v = A^{k-1}(Av) = \lambda A^{k-1}v = \lambda^k v $$
hence $A^k$ has an eigenvalue $\lambda^k$. For the multiplicity note that $A = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}$ has $\lambda = 1$ with multiplicity 1, but $A^2 = {\rm Id}$ has $\lambda^2 = 1$ with multiplicity 2. |
H: Understanding a limit in standard Borel probability space
There is an exercise in lectures:
Let $(X, \Sigma, \mu)$ be a standard Borel probability space and $(B_n)_{n \in \mathbb{N}} \subset \Sigma$.
Show that
\begin{align*}
\mu\Big(\bigcup_{n \in \mathbb{N}} B_n\Big) = \lim_{N \to \infty} \mu\Big(\bigcup_{n \leqslant N} B_n\Big)
\end{align*}
I'm not sure where to start. My problem is that I'm confused about what the limit even means here. Would appreciate a hint or pointer on how to go about evaluating such limits ...
AI: Hint: The limit here is the usual limit of real sequences; note that for each $N$, $\mu(\bigcup_{n\le N} B_n) =: a_N$ is a real number in $[0,1]$, and you are asked to prove that the sequence $(a_N)$ converges to $\mu(\bigcup_n B_n)$. To do so, one possibility is to write $\bigcup B_n$ as a disjoint union $\bigcup B_n = \biguplus C_n$ such that $\bigcup_{n\le N} B_n = \biguplus_{n\le N} C_n$ for each $N$. Can you guess how to find such $C_n$? Then you have
$$ a_N = \mu\left(\bigcup_{n\le N} B_n\right) = \mu\left(\biguplus_{n\le N} C_n\right) = \sum_{n\le N} \mu(C_n) $$
and
$$ \mu\left(\bigcup_{n\in\mathbb N} B_n\right) = \mu\left(\biguplus_{n\in\mathbb N} C_n\right) = \sum_{n\in \mathbb N} \mu(C_n) $$
Can you prove convergence from here? |
H: Determinant of Matrix of Matrices
My question concerns a situation where you are looking for a determinant of a matrix which is in itself composed of other matrices (in my example, all the inner matrices are square and of equal dimensions).
Say we have matrix $A_{cl}$:
$$
A_{cl}=
\left[\begin{matrix}
0 & I\\
-kL_e & -kL_e
\end{matrix}\right]
$$
where $L$ is a Laplacian matrix of a graph (meaning it is symmetric and positive definite in this example because the graph is a spanning tree).
I presume the following:
$L$ is $n \times n$, therefore $A_{cl}$ is $2n \times 2n$.
I see the following development, which I don't understand:
$$
det(\lambda I-A_{cl}) = det(\lambda^2I + (\lambda+1)kL_e) = 0
$$
Since $\lambda = -1$ does not satisfy this equation, it is not an eigenvalue of $A_{cl}$. The eigenvalues of $A_{cl}$ thus satisfy
$$
det(\lambda^2/(\lambda+1)I + kL_e) = 0
$$
Denoting the eigenvalues of $-kL_e$ by $\mu$, one has that, for each $i$,
$$
\mu_i = \lambda^2/(\lambda+1)
$$
and hence
$$
\lambda_i = \frac12(\mu_i+\sqrt{\mu_i^2+4\mu_i})
$$
My beef with this development is mostly in the first sentence of it, where they say:
$$
det(\lambda I-A_{cl}) = det(\lambda^2I + (\lambda+1)kL_e) = 0
$$
This is a determinant of a matrix of matrices, and they treat it as if it were a $2\times 2$ matrix determinant (and keep the det() operation afterwards, which is even more confusing). If anybody could explain the mechanics behind this first part of the development, I would be very grateful.
Thank you
AI: Never mind, I googled determinant of Block Matrix, which gave me:
http://en.wikipedia.org/wiki/Determinant#Block_matrices
So it turns out that for a block matrix $\left[\begin{matrix}A&B\\C&D\end{matrix}\right]$ with square blocks of equal size, if the bottom blocks commute ($CD=DC$), its determinant equals $\det(AD-BC)$, i.e. you can treat it like a regular $2\times 2$ matrix. In my case $C=kL_e$ and $D=\lambda I+kL_e$ do commute, so the development was perfectly legal: $\det(\lambda I-A_{cl})=\det\big(\lambda I\,(\lambda I+kL_e)-(-I)(kL_e)\big)=\det(\lambda^2I+(\lambda+1)kL_e)$. |
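A quick NumPy sketch (my own, using a random symmetric positive definite matrix as a stand-in for $kL_e$) confirming the identity numerically:

    import numpy as np

    rng = np.random.default_rng(1)
    n, lam = 4, 0.7
    M = rng.standard_normal((n, n))
    kL = M @ M.T + np.eye(n)          # stand-in for k * L_e (symmetric, PD)
    I, Z = np.eye(n), np.zeros((n, n))
    A_cl = np.block([[Z, I], [-kL, -kL]])

    lhs = np.linalg.det(lam * np.eye(2 * n) - A_cl)
    rhs = np.linalg.det(lam ** 2 * np.eye(n) + (lam + 1) * kL)
    print(np.isclose(lhs, rhs))       # True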
H: Showing symmetry involving a matrix and its transposed matrix
I'd appreciate if someone could find a better title for this question, for I'm short of ideas right now.
Given a matrix $A \in R^{n,n}$, show that
$$
\frac{1}{2}(A + A^t)
$$
is symmetric.
I see that it's symmetric and it seems obvious, but I don't really know how to show that in particular.
AI: $(B+B^t)^t=B^t+(B^t)^t=B^t+B$
So $B+B^t$ is symmetric $\forall B\in R^{n\times n}$.
Take $B=A/2$ and you get the desired result. |
H: Prove that det(BA) = 0 under some circumstances
How to prove that:
$\det(BA) = 0$, assuming:
$m < n$, $A \in M_{m\times n}$, $B\in M_{n\times m}$?
AI: $\operatorname{rank}(BA)\le \min(\operatorname{rank} A,\operatorname{rank} B)\le m<n$.
Now, as $BA$ is an $n\times n$ matrix and its rank is less than $n$, its rows (columns) are not independent vectors, implying $\det(BA)=0$.
Proof of the fact that $\operatorname{rank}(BA)\le \min(\operatorname{rank} A,\operatorname{rank} B)$:
$\operatorname{rank} A\le m$ and $\operatorname{rank} B\le m$ (an obvious fact, as $\operatorname{rank} A$ = dimension of the column space of $A$ = dimension of the row space of $A$).
Let $E_{n\times n}B$ be the row echelon form of $B$ and let $AE_{m\times m}$ be the column echelon form of $A$ ($E_{n\times n}$ and $E_{m\times m}$ are products of elementary matrices, hence invertible).
We know $\operatorname{rank}(BA)=\operatorname{rank}(E_{n\times n}BAE_{m\times m})$, since multiplying by invertible matrices does not change the rank.
But $E_{n\times n}BAE_{m\times m} =\begin{pmatrix}
L&0\\
0&0\\
\end{pmatrix}$
where $L$ is a $k\times l$ matrix with $k\le \operatorname{rank} B$ and $l\le \operatorname{rank} A$,
so $\operatorname{rank}(E_{n\times n}BAE_{m\times m})=\operatorname{rank}\begin{pmatrix}
L&0\\
0&0\\
\end{pmatrix}\le \min\{k,l\}\le \min\{\operatorname{rank} A,\operatorname{rank} B\}$ |
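A hedged numerical illustration (my own sketch): for random matrices of these shapes, $\det(BA)$ vanishes up to floating-point error.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 3, 5                        # m < n
    A = rng.standard_normal((m, n))    # A is m x n
    B = rng.standard_normal((n, m))    # B is n x m
    BA = B @ A                         # n x n, but of rank at most m < n
    print(np.linalg.det(BA))           # ~0 up to rounding
    print(np.linalg.matrix_rank(BA))   # at most m = 3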
H: Given the point (2,6) and a line passing through the point (3,0)
The question is: can the distance from the point $(2,6)$ to that line be $5$?
Is there a solution to the problem without computing?
I would be glad to know.
Thanks.
AI: Since the line $y=0$ through $(3,0)$ has distance 6 from the point $(2,6)$, there is a line with distance 0 (the one through the two points), and the distance varies continuously as the line rotates, yes: there are (two) lines with distance 5. If you want to know which lines, that's another matter. |
H: Probability of selecting all 3 male mice among 10 selected at random
In a cage there are 100 mice, of which 3 are male.
Compute the probability of selecting all 3 males in a group of 10 mice selected at random.
I have this intuition:
$$
P(male)=0.03
$$
and the number of ways the 3 male mice could be placed among the 10 selected:
$$
C(10, 3) = \frac{10!}{3! \times 7!}
$$
Resulting:
$$
P(all\ 3\ males\ from\ 10) = 0.03^3 \times 0.97^7 \times C(10,3)
$$
I feel there is something totally wrong with it but I have no idea what.
AI: How many ways can you choose a group of 10 mice without restrictions?
$^{100}C_{10}$
How many ways can you choose a group of 10 mice including all three male mice?
$^{3}C_{3}\,^{97}C_{7}=\,^{97}C_{7}$ i.e. choose all three males and seven other mice.
Hence the probability is $\frac{^{97}C_{7}}{^{100}C_{10}}=\frac{10\cdot 9\cdot 8}{100\cdot 99\cdot 98}\approx 7.4\times 10^{-4}$.
Your calculation is more suited to independent events, where we'd put each mouse back after choosing it and then just see the genders of 10 mice chosen in this way. |
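For a concrete number, this is easy to evaluate in Python (my addition; math.comb needs Python 3.8+):

    from math import comb, isclose

    p = comb(97, 7) / comb(100, 10)
    print(p)  # about 0.000742
    # After cancellation the ratio is just (10*9*8)/(100*99*98):
    print(isclose(p, (10 * 9 * 8) / (100 * 99 * 98)))  # True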
H: Boolean simplification, 5 variables
I'm currently studying for my maths exam, and in the part about Boolean algebra I came across an exercise that I can't seem to solve. I probably only need the first few steps to get started.
$$ (xyz + uv)(x+\overline{y}+\overline{z}+uv) $$
Usually, if I get into trouble, I can fall back to a truth table or VK-diagram, but that's just too much work for 5 variables.
Thanks in advance!
AI: Multiply the terms in the two brackets. You get:
$$ xyz + 0 + 0 + xyzuv + xuv + \overline{y}uv + \overline{z}uv + uv$$
$$ xyz(1+uv) + uv(1+x+\overline{y}+\overline{z})$$
$$ xyz + uv $$
NOTE:
$1 + x = 1$
$1 \cdot x = x$
$x \cdot x = x$
$x \cdot \overline{x} = 0$ |
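Since there are only $2^5 = 32$ assignments, the simplification is also easy to confirm by brute force; a short Python sketch (my addition):

    from itertools import product

    ok = True
    for x, y, z, u, v in product((0, 1), repeat=5):
        lhs = ((x & y & z) | (u & v)) & (x | (1 - y) | (1 - z) | (u & v))
        rhs = (x & y & z) | (u & v)
        ok &= (lhs == rhs)
    print(ok)  # True: the two expressions agree on all 32 assignments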
H: Trace of a diagonalized matrix
Why do I have: $Tr(SDS^{-1})=Tr(D)$?
AI: Note that for any matrices $A$ and $B$ we have $\def\tr{\mathop{\rm Tr}}\tr(AB) = \tr(BA)$. To see this, one can argue as follows:
\begin{align*}
\tr(AB) &= \sum_i (AB)_{ii}\\
&= \sum_i \sum_j A_{ij}B_{ji}\\
&=\sum_j \sum_i B_{ji}A_{ij}\\
&= \sum_j (BA)_{jj}\\
&= \tr(BA)
\end{align*}
Hence $$ \tr(SDS^{-1}) = \tr(DS^{-1}S) = \tr(D). $$ |
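A quick NumPy check (my own sketch) of this invariance:

    import numpy as np

    rng = np.random.default_rng(0)
    D = np.diag(rng.standard_normal(4))
    S = rng.standard_normal((4, 4))    # almost surely invertible
    print(np.isclose(np.trace(S @ D @ np.linalg.inv(S)), np.trace(D)))  # True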
H: What is a "maximal" object?
The idea of a "maximal" graph was introduced in a proof for Ore's Condition.
I didn't quite get the idea, and I would like more detailed explanations.
The theorem and proof are as follows.
Suppose $G$ is a graph with $v$ vertices ($v \ge 3$), and for every pair of nonadjacent vertices $x$ and $y$, $\deg(x)+\deg(y) \ge v$; then $G$ is hamiltonian.
Proof: Suppose the theorem is not true. We can assume that all pairs of nonadjacent vertices satisfy the given degree condition, and that if $p$ and $q$ are nonadjacent vertices then the graph formed by adding edge $pq$, denoted $G+pq$, will be hamiltonian (if not, then join $pq$ and use the new graph instead of $G$). We would say $G$ is maximal for the condition.
Source: "Introduction to Combinatorics" P.167
From the "Suppose the theorem is not true", I expect the proof to say that the inequality is satisfied in $G$ but $G$ is not hamiltonian. How does this relate to $G+pq$ being hamiltonian?
Also, does the sentence inside the parentheses mean that I should keep joining nonadjacent vertices until I get a graph that is hamiltonian?
If this is the case, how can I be guaranteed that I can obtain a hamiltonian graph by this process?
AI: You are right in your first observation: When we say, suppose the theorem is wrong, we suppose that we have a graph $G$ fulfilling the degree condition without being Hamiltonian. We now add some assumptions, which can be made without loss (as we will show): Note that if we add an edge $pq$ to $G$ the degree condition given will be fulfilled for $G + pq$ also. So we will add edges to $G$ until we cannot add another edge to $G$ without making the new graph Hamiltonian. The graph we have constructed then has the following properties:
$G$ fulfills the degree condition given
$G$ is not Hamiltonian
$G+pq$ is Hamiltonian for any edge $pq$ not in $G$
A graph with the third property is called "maximal non-Hamiltonian", because it cannot be made "bigger" without becoming Hamiltonian. Note also that the edge-adding process is guaranteed to terminate: a graph on $v$ vertices has only finitely many non-edges, and the complete graph $K_v$ is Hamiltonian for $v\ge 3$, so we must stop at a non-Hamiltonian graph before (or without) exhausting all possible edges. |