H: $H$ and $K$ are supposed to be closed in $X$, Why?
Let $X$ be a topological space and $A$ a closed subspace of $X$. Let $H$ and $K$ be closed in $A$. Now $H$ and $K$ are supposed to be closed in $X$, Why?
AI: The complement of $H$ in $A$ is open in $A$, so it is the intersection of some open set $U$ in $X$ with $A$. The complement of $H$ in $X$ is then the union of $U$ and the complement of $A$ in $X$, both of which are open.
Exactly the same argument works for $K$ - there seems to be no reason to state the problem with two subsets, so I assume this comes from some larger context.
H: Proofreading, a little question, and refining the English of some college algebra
You are welcome to edit my post to revise any mistakes, both English and math. Thanks.
Theorem 12
polyIC = polynomial with integer coefficients
polyRC = polynomial with rational coefficients
Suppose $f(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_0$ is a polyIC, and $\frac{r}{s}$ is a rational root, where $(r,s)=1$.
Then we must have $s\mid a_n$ and $r\mid a_0$. In particular, if the leading coefficient of $f(x)$ satisfies $a_n=1$,
then the rational roots of $f(x)$ are all integer roots, and they are factors of $a_0$.
Proof:
Because $\frac{r}{s}$ is a rational root of $f(x)$, over the rational number field $\left.\left(x-\frac{r}{s}\right)\right|f(x)$.
Thus $[\text{Mark1}]$ $(sx-r)\mid f(x)$. Since $(r,s)=1$, $sx-r$ is a primitive polynomial.
Combining the above, $f(x)=(sx-r)\left(b_{n-1}x^{n-1}+\cdots+b_0\right)$, where $b_{n-1},\dots,b_0$ are all integers.
Comparing the coefficients of the two sides gives $a_n=s b_{n-1}$ and $a_0=-r b_0$.
Hence $s\mid a_n$ and $r\mid a_0$.
One little question:
Is $[\text{Mark1}]$ really so obvious?
$\left.\left(x-\frac{r}{s}\right)\right|f(x)$ $\Longrightarrow $ $(sx-r)\mid f(x)$
Of course, this is obvious in this example: $f$ below has the rational root $4/5$, so $5(x-4/5)\mid f(x)$
$\pmb{f=25\left(x-\frac{4}{5}\right)(x-2)\left(x+\frac{4}{5}\right)\text{//}\text{Expand}}$
$32-16 x-50 x^2+25 x^3$
$\pmb{\text{Solve}[f==0,x]}$
$\left\{\left\{x\to -\frac{4}{5}\right\},\left\{x\to \frac{4}{5}\right\},\{x\to 2\}\right\}$
$\pmb{\text{Factor}[f]}$
$(-2+x) (-4+5 x) (4+5 x)$
$\pmb{5(x-4/5)\text{//}\text{Expand}}$
$-4+5 x$
AI: When you divide a polynomial $f$ with rational coefficients by $x-t$ with $t$ rational, you get a quotient polynomial with rational coefficients and a rational number as remainder; $$f(x)=(x-t)q(x)+r$$ Now let $x=t$, and you get $f(t)=r$. So if $f(t)=0$, then $r=0$, so $f(x)=(x-t)q(x)$, so $x-t$ divides $f(x)$.
H: Finding maximum volume
A box has corner (0,0,0) and all edges parallel to the axes. If the opposite corner (x,y,z) is on the plane $$ 3x+2y+z=1 $$, what position gives maximum volume? Show first that the problem maximizes
$$
xy-3x^2y-2xy^2
$$
Can somebody please explain how this should be solved?
AI: From the description of the problem, we have each edge parallel to an axis, one corner at $(0,0,0)$ and the opposite corner at $(x,y,z)$. This means that we can think of the sides of the box as having lengths $x$, $y$, and $z$. This gives us a volume function $V(x,y,z)=xyz$.
We know that the corner $(x,y,z)$ is on the plane defined by $3x+2y+z=1$. If we solve this equation for $z$, we have $z=1-3x-2y$. We can then substitute this into our volume function and obtain $$V(x,y)=xy(1-3x-2y)=xy-3x^2y-2xy^2.$$
Now, we need to maximize this function. To get started, we need to take the partial derivatives of $V$ with respect to both $x$ and $y$. Please ask if you need help with this part.
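As a quick sanity check, here is a minimal sympy sketch of my own (the names V, x, y are mine) that locates the interior critical point and the resulting maximum volume:

    import sympy as sp

    x, y = sp.symbols('x y', positive=True)
    V = x*y*(1 - 3*x - 2*y)  # volume after substituting z = 1 - 3x - 2y

    # solve grad V = 0 for the interior critical point
    crit = sp.solve([sp.diff(V, x), sp.diff(V, y)], [x, y], dict=True)
    print(crit)             # x = 1/9, y = 1/6
    print(V.subs(crit[0]))  # maximum volume 1/162 (with z = 1/3)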
H: If $\lim\limits_{x\rightarrow\infty} (f'(x)+f(x)) =L<\infty$, does $\lim\limits_{x\rightarrow\infty} f(x) $ exist?
I want to prove or disprove the following:
If $\lim\limits_{x\rightarrow \infty} (f'(x)+f(x))=L<\infty$ exists, then $\lim\limits_{x\rightarrow\infty} f(x) =L$.
Whereas when I consider the problem below:
If $\lim\limits_{x\rightarrow\infty} (f'(x)+f(x)) =L<\infty$ exists, does $\lim\limits_{x\rightarrow\infty} f(x)$ exist?
I can use the mean-value theorem to show it.
So my question is:
If $\lim\limits_{x\rightarrow\infty} (f'(x)+f(x))=L<\infty$, does $\lim\limits_{x\rightarrow\infty} f(x)$ exist?
AI: Consider the function $$g(x)=e^{x} f(x).$$ Then $$Dg(x)=e^{x}f(x)+e^{x}Df(x)=e^{x} \left( f(x)+Df(x) \right).$$
Now,
$$
\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty} \frac{g(x)}{e^x} = \lim_{x \to +\infty} \frac{Dg(x)}{e^x} = \lim_{x \to +\infty} \left( Df(x)+f(x) \right)
$$
by L'Hospital's theorem (applicable here since the denominator $e^x\to+\infty$).
N.B. I think this exercise was solved by G. Hardy in one of his books.
H: An interesting problem using Pigeonhole principle
I saw this problem: Let $A \subset \{1,2,3,\cdots,2n\}$. Also, $|A|=n+1$.
Show that there exist $a,b \in A$ with $a \neq b$ such that $a$ and $b$ are coprime.
I proved this one very easily by using the pigeonhole principle on the partition $\{1,2\},\{3,4\},\dots,\{2n-1,2n\}$.
My question is: how can I prove or disprove the following:
Let $A \subset \{1,2,3,\cdots,2n\}$. Also, $|A|=n+1$.
Show that there exist $a,b \in A$ with $a \neq b$ and $a|b$.
I can't construct a suitable partition. Is this true?
AI: Any number from the set $\{1,2,\dots,2n\}$ is of the form $2^{k}l$ where $k\ge 0$, $1\le l\le 2n-1$ and $l$ is odd.
The number of odd numbers $l\le 2n-1$ is $n$.
Now if we select $n+1$ numbers from $\{1,2,\dots,2n\}$ then there must be two numbers (among the selected ones) with the same $l$.
That is, we must get $a,b$ with $a=2^{k_1}l$ and $b=2^{k_2}l$; as $a\ne b$, we have $k_1\ne k_2$. Now if $k_1>k_2$ then $b|a$, else $a|b$.
This completes the proof.
H: Problem with a definition of $\succsim$-maximal element and $\succsim$-maximum
I found in a book the following definitions of $\succsim$-maximal element and $\succsim$-maximum, but I really don't see the difference in what is written in the book (I do know the difference between those two concepts). Am I wrong?
Let $(X,\succsim )$ be a preordered set and $\emptyset \neq Y \subseteq X$. An element $x$ of $Y$ is said to be $\succsim$-maximal in $Y$ if there is no $y \in Y$ with $y \succ x$. If $x \succsim y $ for all $y \in Y$, then $x$ is called the $\succsim$-maximum of $Y$.
AI: The difference is in the negation:
Maximal element is one that there does not exist anyone strictly larger.
Maximum element is one that every element is smaller or equivalent to.
We have $\lnot\exists y\,(y\succ x)$ for maximality versus $\forall y\,(y\precsim x)$ for the maximum; note that the first statement uses the strict relation $\succ$, while the second uses the weak relation $\precsim$.
H: What is the difference between multiplicative group of integers modulo n and a Galois Field
What is the difference between multiplicative group of integers modulo n and a Galois Field?
Is $\mathbb{Z}^*_p$ with p prime the same as $GF(p)$?
Or is it the same as $\mathbb{Z}/n\mathbb{Z}$?
Sorry for the short and simple question, but this bit of notation is confusing me, so a clear difference should help me a lot!
Thanks!
AI: If $p$ is prime then the ring $\mathbb Z/p\mathbb Z$ is in fact a field and is "the" field with $p$ elements (as there is only one up to isomorphism). Note however that for prime powers $q=p^n$ with $n>1$, the ring $\mathbb Z/q\mathbb Z$ is not a field, hence is different from the field $GF(q)$ (or $\mathbb F_q$, depending on author).
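To see the difference concretely, here is a tiny check of my own in Python: every nonzero class in $\mathbb Z/5\mathbb Z$ is invertible, while $2$ has no inverse in $\mathbb Z/4\mathbb Z$:

    def units(n):
        # classes in Z/nZ that have a multiplicative inverse mod n
        return [a for a in range(1, n) if any(a * b % n == 1 for b in range(1, n))]

    print(units(5))  # [1, 2, 3, 4]: every nonzero class is invertible, so a field
    print(units(4))  # [1, 3]: the class of 2 has no inverse, so not a field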
H: Stupid question about $1 - \frac{1}{2}-\frac{1}{4}+\frac{1}{3}- \frac{1}{6}-\frac{1}{8}+\frac{1}{5}\dots$
I have
$$1 - \frac{1}{2}-\frac{1}{4}+\frac{1}{3}- \frac{1}{6}-\frac{1}{8}+\frac{1}{5}\dots$$
Partial sum $S_{3n}$ of the above is:
$$(1 - \frac{1}{2}-\frac{1}{4})+(\frac{1}{3}- \frac{1}{6}-\frac{1}{8})+(\frac{1}{5}-\dots$$
But what is $S_{3n-1}$ and $S_{3n-2}$ ?
AI: By guessing the pattern hidden in the ellipsis, it seems that you consider the series
$\sum_{k=1}^\infty a_k $
where $a_{3n-2}=\frac1{2n-1}$, $a_{3n-1}=-\frac1{4n-2}$, $a_{3n}=-\frac1{4n}$ for $n\ge 1$.
Thus $a_{3n-2}+a_{3n-1}+a_{3n}=\frac1{2n-1}-\frac1{4n-2}-\frac1{4n}=\frac1{4n(2n-1)}$ and we have the partial sums
$$S_{3n}=\sum_{k=1}^{3n} a_k =\sum_{k=1}^n\frac1{4k(2k-1)}$$
$$S_{3n-1}=\sum_{k=1}^{3n-1} a_k =\sum_{k=1}^n\frac1{4k(2k-1)}+\frac1{4n}$$
$$S_{3n-2}=\sum_{k=1}^{3n-2} a_k =\sum_{k=1}^n\frac1{4k(2k-1)}+\frac1{4n}+\frac1{4n-2}$$
H: A basic probability problem on conditional expectation
A coin having probability $p$ of coming up heads is successively flipped until the two most recent flips are both heads. Let $N$ denote the number of flips. How can one find $E[N]$?
AI: Assume that $p\ne 0$. At any time during the tossing, we are in state $A$ if we have not just tossed a head. We are in state $B$ if we have just tossed a head, and we have not yet reached our goal.
Let $a$ be the required expectation. Note that $a$ is also the expected number of additional tosses given that we are in state $A$. Let $b$ be the expected number of additional tosses given that we are in state $B$. We assume that $a$ and $b$ are finite. This is not difficult to prove, but we leave the sketch of a proof to a remark at the end.
If we are in state $A$, then with probability $p$ we enter state $B$, at the cost of $1$ toss, and with probability $1-p$ we stay in state $A$, again at the cost of $1$ toss. Thus
$$a=p(b+1) +(1-p)(a+1)=1+pb+(1-p)a.$$
A similar argument shows that
$$b=p(1)+(1-p)(a+1)=1+(1-p)a.$$
Solve the above system of linear equations. We get $a=\frac{1+p}{p^2}$.
Remark: To show that $a$ is finite, note that the probability of head at tosses $2k-1$ and $2k$ is $p^2$. So the probability that the number of tosses used is $2n$ is $\le (1-p^2)^n$. It follows that $a\le \sum_{1}^\infty 2n(1-p^2)^n$. By the Ratio Test, since $1-p^2\lt 1$, the series converges, so $a$ is finite.
One could calculate $a$ by summing appropriate series. However, the conditional expectation argument of the answer is simpler.
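For what it's worth, a short Monte Carlo sketch of my own agrees with $a=\frac{1+p}{p^2}$; with $p=0.5$ both printed values are close to $6$:

    import random

    def flips_until_two_heads(p):
        # flip until the two most recent flips are both heads
        n, prev_head = 0, False
        while True:
            n += 1
            head = random.random() < p
            if head and prev_head:
                return n
            prev_head = head

    p, trials = 0.5, 200_000
    est = sum(flips_until_two_heads(p) for _ in range(trials)) / trials
    print(est, (1 + p) / p**2)  # both close to 6.0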
H: Proving a biconditional statement with an or
I want to prove a theorem in geometry of the form $p \iff q \vee r$. My plan is to prove:
$q \implies p$ as well as $r \implies p$
$p \text{ and } \lnot q \implies r$
Can I get someone to verify that I haven't sidetracked in my logic?
AI: Your planned approach is sound: $q \implies p$ and $r \implies p$ together give $(q \vee r) \implies p$, while $p \wedge \lnot q \implies r$ is equivalent to $p \implies (q \vee r)$.
H: Probability that two points are more distant than a third equidistant point
Say you have three points $x,y,z \in \mathbf{R}^n$ with standard Euclidean distance $d$ and $d(x,y) = d(y,z)$. Then what's the probability that $d(x,z) > d(x,y)$ for random $x,z$? For convenience, $y=\mathbf{0}$ and $d(x,y) = d(y,z) = 1$.
For $n=3$, I'm guessing it's
$$
p(d(x,z) > 1) = p(d(x,z)>1|x=(1,0,0))
\\
=1-A(z,1) / 4\pi
\\
\approx 3/4
$$
where $A(z,r)$ is the area for the set of points $\{z|d(z,(1,0,0))<r,d(z,\mathbf{0})=1\}$. I don't know how that would be calculated but I assume $\pi$ is a bad approximation for it.
For $n$ in general? Is there a closed form for this probability?
AI: For $n=1$, we find $p_1=\frac12$.
For $n=2$, we find $p_2=\frac23$.
For general $n$, we find
$$p_n=1-\frac{\int_0^{\frac\pi3}(\sin \phi)^{n-2} \,\mathrm d\phi}{\int_0^{\pi}(\sin \phi)^{n-2} \,\mathrm d\phi}.$$
Especially, the value for $n=3$ is exactly
$$p_3=1-\frac{\int_0^{\frac\pi3}\sin \phi \,\mathrm d\phi}{\int_0^{\pi}\sin \phi \,\mathrm d\phi}=1-\frac{\cos 0-\cos \frac\pi3 }{\cos 0-\cos \pi}=\frac34.$$
Warning: $p_4=\frac23+\frac{\sqrt 3}{4\pi}\ne \frac45$.
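The $n=3$ value is easy to test by simulation (a sketch of my own): fix $x=(1,0,0)$ and sample $z$ uniformly on the unit sphere by normalizing Gaussian vectors; the estimate should be near $3/4$:

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials = 3, 200_000
    z = rng.normal(size=(trials, n))
    z /= np.linalg.norm(z, axis=1, keepdims=True)  # uniform points on the unit sphere
    x = np.zeros(n); x[0] = 1.0
    print(np.mean(np.linalg.norm(z - x, axis=1) > 1.0))  # approximately 0.75 = p_3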
H: Finding the largest $n \in \mathbb{N}$ for which $n-7$ divides $n^3-7$
I want to find the largest $n \in \mathbb{N}$ for which $n-7$ divides $n^3-7$. In other words, I am looking for the largest $n$ such that $\frac{n^3-7}{n-7}$ is an integer. Can anyone provide me with a hint? Please do not post a full solution.
AI: Hint: Again, the same idea, but avoiding polynomial division. Write $$n\equiv 7\pmod {n-7}$$ so $$n^3-7 \equiv 7^3-7\pmod {n-7}$$
H: Motion with acceleration and deceleration
Stumbled on this while working on program code to display a motion animation. The scenario is: the program has a function to move a picture from its current position to the right. It takes only two inputs, the travel distance and the time to cover that distance. The distance is in pixels and the duration is in frames (in this case, the program runs at $60$ frames/second). Linear movement with zero acceleration is not a problem: the velocity is simply distance/time. But I'm stuck because the program requires acceleration and deceleration in the motion. The motion is divided into 3 phases:
The first $30\%$ of the time is spent accelerating, starting from $0$ and slowly increasing speed.
For the next $40\%$ of the time, the movement has no acceleration; during this phase the speed stays the same.
The last $30\%$ of the time is spent decelerating, starting from the speed of the 2nd phase and slowly decreasing until it stops.
I'm not good at physics, and what makes this tricky for me is that the only information I have is the total distance to be covered and the time needed to cover it. There are no limits on the velocity/acceleration.
Let's take an example: a picture needs to be moved $300$ pixels to the right in $70$ frames.
My question is:
What is the equation to calculate the velocity in each frame for each phase in the above scenario?
What is the equation for the acceleration in phases $1$ and $3$?
Any help will be appreciated :) Thanks in advance
AI: You need a function $f\colon[0,T]\to[0,d]$ (your specific example has $d=300$ and $T=70$) that looks like
$$ f(t)=\begin{cases}\frac12at^2&\text{if }t<\frac3{10}T,\\
vt+x_0&\text{if }\frac3{10}T\le t\le \frac7{10}T,\\
d-\frac12a(T-t)^2&\text{if }t>\frac7{10}T.\end{cases}$$
By symmetry, $vt+x_0=v(t-\frac T2)+\frac d2$.
The facts that $f$ and its derivative $f'$ are continuous translate to the conditions
$$ a\cdot\frac3{10}T = v$$
and
$$ \frac12a\cdot\frac9{100}T^2= -\frac15vT+\frac d2.$$
From this we find
$$ a=\frac{100}{21}\cdot \frac d{T^2}$$
and $$v = \frac{10}{7}\cdot \frac dT.$$
Remark: Spline interpolation of the coordinate offsets may be a good and fast alternative.
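Since the original question is about program code, the answer translates directly into a per-frame position function; here is a minimal Python sketch of my own using the constants derived above (with $d=300$, $T=70$, the final position is exactly $300$):

    def position(t, d=300.0, T=70.0):
        # accelerate / cruise / decelerate profile f(t) from the answer
        a = 100.0 / 21.0 * d / T**2  # acceleration in phases 1 and 3
        v = 10.0 / 7.0 * d / T       # cruising speed in phase 2
        if t < 0.3 * T:
            return 0.5 * a * t**2
        if t <= 0.7 * T:
            return v * (t - T / 2) + d / 2
        return d - 0.5 * a * (T - t)**2

    print([round(position(frame), 1) for frame in range(0, 71, 10)])  # ends at 300.0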
H: Product topology with finer/coarser comparison
There is a question in Munkres' Topology which has me a little confused:
Let X have topologies $\mathfrak{T}$, $\mathfrak{T'}$, and Y have a topologies $\mathfrak{U}$,$\mathfrak{U'}$. Show that if $\mathfrak{T} \subset \mathfrak{T'}$ and $\mathfrak{U} \subset \mathfrak{U'}$, then the product topology on X$\times$Y under $\mathfrak{T'},\mathfrak{U'}$ (I'll denote by $\mathfrak{T'}\ast\mathfrak{U'}$) is finer than the product topology under $\mathfrak{T},\mathfrak{U}$ (I'll denote by $\mathfrak{T}\ast\mathfrak{U}$). Is the converse true?
So the first part was fine. The second I thought might be true and proved it, but I immediately found a counterexample. My question is where is the wrong step in the "proof" I got down.
Let $\mathfrak{B},\mathfrak{B'}$ be bases for $\mathfrak{T},\mathfrak{T'}$, respectively, and $\mathfrak{C},\mathfrak{C'}$ for $\mathfrak{U},\mathfrak{U'}$, respectively. For $x\in B\in \mathfrak{B}$ and $y\in C\in \mathfrak{C}$, then $(x,y)\in B\times C$ which is a basis element for $\mathfrak{T}\ast\mathfrak{U}$. Since it's finer, there is a basis element $U\times V$ of $\mathfrak{T'}\ast\mathfrak{U'}$ such that $(x,y)\in U\times V\subset B\times C$ ($U$ open in X, $V$ open in Y under $\mathfrak{T'},\mathfrak{U'}$, resp). So $x\in U, y\in V$, and since they are open, there exist $B'\in \mathfrak{B'}, C'\in \mathfrak{C'}$ with $x\in B'\subset U$ and $y\in C'\subset V$. Since $U\subset B$ and $V\subset C$ then $x\in B'\subset B$, $y\in C'\subset C$. Then $\mathfrak{T'}$ is finer than $\mathfrak{T}$, and $\mathfrak{U'}$ is finer than $\mathfrak{U}$.
As I said, I know something must be wrong. I know it's stupid, but I actually cannot find the wrong step here.
Thanks in advance.
EDIT: the counter-example I found was: X= {a,b,c}, Y={1,2,3}, $\mathfrak{T}$={X,{a},{a,c}}, $\mathfrak{T'}$={X,{a},{b},{a,b}}, $\mathfrak{U}$={Y,{1}}, $\mathfrak{U'}$={Y,{1},{2},{1,2}}. Of course, now I see this just confirms the contrapositive of the converse, so I was wrong about being wrong...
AI: The converse is actually true. Your proof appears OK--you must be using the criterion given in Lemma 13.3? In this case I think there is a nicer proof: Take $U\in \mathfrak{T}$ and $V \in \mathfrak{U}$. Then $U \times V \in \mathfrak{T} \ast \mathfrak{U}\subseteq \mathfrak{T}' \ast \mathfrak{U}'$. But the projections $X \times Y \to X$ and $X \times Y \to Y$ are open maps (with respect to $\mathfrak{T}' \ast\mathfrak{U}'$, $\mathfrak{T}'$, and $\mathfrak{U}'$), so $U \in \mathfrak{T}'$ and $V \in \mathfrak{U}'$.
H: Train and horseman velocity
This is a more or less easy exercise but there is one point I do not understand.
We have a train starting at $(0|0)$ with velocity $v_t=20\,\mathrm{m/s}$ straight in the $y$-direction and a horseman at $H=(100|100)$ with velocity $v_h=15\,\mathrm{m/s}$.
There are two questions now:
1) In which direction does the horseman need to ride to meet the train at the same time on the line of the train.
2) What is the minimum velocity for the horseman to meet the train? In which direction does he have to go now?
You can draw a picture and will immediately see two things (to solve 1):
$(y-100)^2+100^2=x^2$, where $y$ is the length from $0$ to the meeting point $M=(0|y)$ on the $y$-axis and $x$ is the length from the horseman to the meeting point. We also see that $\frac{x}{y}=\frac{15}{20}$. Now we can solve the two equations to get $y$ and $x$; the direction is therefore $HM$.
For the second part I have $\frac{\sqrt{(y-100)^2+100^2}}{20}=\frac{x}{v_{min}}$. Now I want to get somehow an equation like $x_{1,2}=...$ where only $v_{min}$ appears. Setting the root inside to 0 will lead to $v_{min}$, but how?
AI: For the second, the distance the train covers is $y$ and the time is $\frac y{20}$. The horse covers $\sqrt{(y-100)^2+100^2}$ in time $\frac {\sqrt{(y-100)^2+100^2}}{v_h}$. Setting these equal we have $\frac y{20}=\frac {\sqrt{(y-100)^2+100^2}}{v_h}$ or $v_h=\frac {20\sqrt{(y-100)^2+100^2}}{y}$. Now you can take the derivative with respect to $y$ and you are home.
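Carrying out that last step with sympy (my own sketch) gives $y=200$ and $v_{\min}=10\sqrt2\approx 14.14\,\mathrm{m/s}$, comfortably below the horseman's $15\,\mathrm{m/s}$:

    import sympy as sp

    y = sp.symbols('y', positive=True)
    v = 20 * sp.sqrt((y - 100)**2 + 100**2) / y  # speed needed to meet the train at (0|y)
    y_star = sp.solve(sp.diff(v, y), y)[0]       # critical point y = 200
    print(y_star, sp.simplify(v.subs(y, y_star)))  # 200, 10*sqrt(2)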
H: Find all functions $f$ so that $d(f(x))=x$ for every natural $x$.
Help me find all functions $f(x)$, $f:\mathbb N \to \mathbb N$, so that $d(f(x))=x$ for every natural number $x$ where $d(x)$ is number of divisors of $x$.
My work so far: Clearly $f(1)=1$. We also have $d(n)\leq2\sqrt{n}$, so $d(f(x))=x\leq2\sqrt{f(x)}$, hence $\frac{x^2}{4}\leq f(x)$. Using Fermat's little theorem and a little work on the function, I also showed that for every odd prime number $p$, $p|f(p)-1$.
I would appreciate any help.
AI: Recall that we can factor any number $k$ as $p_1^{n_1} p_2^{n_2} p_3^{n_3} \cdots$ where $p_1, p_2, p_3, \ldots$ are distinct primes and $n_1, n_2, n_3, \ldots \geq 1$. Then the number of factors of $k$ is $(n_1+1)(n_2+1)(n_3+1)\cdots$
So in fact we should first determine the factorizations of $x$; each one gives us values of the $n_i$, which we can then assign as exponents to any primes of our choosing to produce a value for $f(x)$.
H: Conjecture on a combination of positive integers in terms of primes
In the course of a heuristic calculation, I am struggling with a possible proof of my following conjecture:
Every positive integer $n\in \Bbb N$ can be written as a unique combination of $a,b \in \Bbb N$, $m\in \Bbb N_0$ and $p \in \Bbb P$ (a prime), such that:
$$n=a \,p^{m+1}-b\,p^m$$
Has anyone heard yet about such a problem? What might be the proof?
AI: Counterexamples to uniqueness are not hard to find. For example, $36=(2)(3^3)-(2)(3^2)$, and $36=(5)(2^3)-(1)(2^2)$.
Already even if we stick to the single prime $3$, we can express $36$ in infinitely many ways as $(a)(3^3)-(b)(3^2)$.
H: Proving existence of a surjection $2^{\aleph_0} \to \aleph_1$ without AC
I'm quite sure I'm missing something obvious, but I can't seem to work out the following problem (web search indicates that it has a solution, but I didn't manage to locate one -- hence the formulation):
Prove that there exists a surjection $2^{\aleph_0} \to \aleph_1$ without using the Axiom of Choice.
Of course, this surjection is very trivial using AC (well-order $2^{\aleph_0}$). I have been looking around a bit, but an obvious inroad like injecting $\aleph_1$ into $\Bbb R$ in an order-preserving way is impossible.
Hints and suggestions are appreciated.
AI: In what follows, by a "real", I mean a subset of $\omega\times\omega$, that is, a binary relation on $\omega$. (You can start with a bijection $\pi:\mathbb R\to\mathcal P(\omega\times\omega)$, which can be constructed without choice, so this is fine.)
If this relation happens to be a well-order of $\omega$ in order type $\omega+\alpha$, map the real to $\alpha$. Otherwise, map the real to $0$. This map is a surjection.
By the way, without choice, you cannot inject $\aleph_1$ into $\mathbb R$.
H: What is the number of possible seating arrangements?
I have an exercise to do: how many possibilities are there for placing 12 apostles, if it is important who is sitting next to whom, but it is not important on which side?
Answer possibilities:
a)
b)
c)
d)
It is possible that more than one option is correct, and also that no option is correct.
In my opinion the correct answer is d), but some people in my class have different opinions. Could someone help me with that?
AI: Imagine for a moment that the apostles are just standing in a line. There would be $12!$ ways to arrange them. They are actually in a cycle, however, and any of the $12$ rotations of that cycle counts as the same arrangement for us. Moreover, flipping the cycle also counts as the same cycle for us. Dividing out the redundancies gives
$$
\frac{12!}{12 \cdot 2} = \frac{11!}{2}.
$$
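The division by the $12$ rotations and $2$ reflections can be verified by brute force for small tables (a check of my own; $12$ people is too large to enumerate, but the same formula gives $\frac{11!}{2}=19958400$):

    from itertools import permutations
    from math import factorial

    def circular_seatings(n):
        # identify all rotations and the reflection of each seating
        seen = set()
        for p in permutations(range(n)):
            variants = [p[i:] + p[:i] for i in range(n)]
            variants += [tuple(reversed(v)) for v in variants]
            seen.add(min(variants))
        return len(seen)

    print(circular_seatings(5), factorial(4) // 2)  # both 12
    print(factorial(11) // 2)                       # 19958400 for the apostles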
H: Inner product on a von Neumann algebra
Let $M$ be a $\sigma$-finite von Neumann algebra (one which admits a faithful normal state) acting on a Hilbert space $H$. Denote its faithful normal state by $\omega$.
We can define an inner product on $M$ by
$$\left< x, y \right> := \omega (y^*x).$$
Let $(\pi, K, \xi)$ be a cyclic GNS representation of $(M, \omega)$, then
$\omega (x) = \left<\xi, \pi(x)\xi\right>$. So we can see that the norm which comes from this inner product is
$$\|x\|_L^2= \left< x, x \right> = \omega(x^*x)= \left< \xi, \pi(x^*x) \xi \right> = \left\| \pi(x)\xi\right\|^2. $$
Since we can identify $M$ with $\pi(M)$ then we have that
$\|x\|_L = \|x\xi\|$.
We see from this that the relation between the standard norm on $\mathcal{B}(H)$, denoted $\| \cdot \|_{\infty}$, and $\| \cdot \|_L$ is the following: $\|x\|_L \leq \|x\|_{\infty}$.
My questions are: Is $(M, \| \cdot \|_L)$ a Hilbert space? I know that if $M=\mathcal{B}(\mathbb{C}^n)$ then the answer is yes, and I doubt that it is true when $H$ is infinite dimensional.
If the answer is no, what does the completion of $M$ with respect to $\| \cdot \|_L$ look like?
If the answer is yes, are $\|\cdot \|_L$ and $\| \cdot \|_{\infty}$ equivalent?
AI: Is $(M, \Vert \cdot \Vert_L)$ a Hilbert space?
No, it is not in general a Hilbert space. This is clear from the commutative case.
If $M$ is of the form $L_\infty(\Omega, \mu)$ and the state $\omega$ is given by integrating against the finite measure $\mu$, then $(M, \Vert \cdot \Vert_L) = (L_\infty(\Omega,\mu) \cap L_2(\Omega, \mu), \Vert \cdot \Vert_2)$, which is usually not complete. Of course, one can complete $(M, \Vert \cdot \Vert_L)$ in the $\Vert \cdot \Vert_L$-norm to obtain a Hilbert space.
Are $\Vert \cdot \Vert_L$ and $\Vert \cdot \Vert_\infty$ equivalent?
No, they are usually not equivalent. The norms $\Vert \cdot \Vert_L$ and $\Vert \cdot \Vert_\infty$ are not usually equivalent as can be seen by considering elements $x_n$ in $L_\infty([0,1])$ with $\Vert x_n \Vert_2=1$ and
$\Vert x_n \Vert_\infty=n$.
What does the completion of $M$ with respect to $\Vert \cdot \Vert_L$ look like?
I'm not sure exactly what you're looking for here. Unless we have a fairly explicit description of $M$, it will be hard to give an explicit description of its completion with respect to $\Vert \cdot \Vert_L$. So I will give a very explicit example and then make some general remarks.
Let $\Gamma$ be a countable discrete group and let $L(\Gamma)$ be its group von Neumann algebra. I now recall how to construct $L(\Gamma)$. Let $\ell_2 (\Gamma)$ be the Hilbert space with orthonormal basis $\{\delta_g \mid g \in \Gamma\}$. Let $\lambda_g$ (for $g \in \Gamma$) be the bounded linear operator on $\ell_2(\Gamma)$ satisfying $\lambda_g \delta_h=\delta_{gh}$, for each $h \in \Gamma$. Then $L(\Gamma)$ is the von Neumann algebra (inside $B(\ell_2(\Gamma))$)
$$
\{ \lambda_g \mid g \in \Gamma \}'' = \overline{\textrm{span} \, \{\lambda_g \mid g\in \Gamma\}}^{wot},
$$
where $''$ denotes the double commutant, and $wot$ stands for the weak operator topology.
Let $\omega$ be the normal faithful state on $L(\Gamma)$ satisfying $$\omega(\lambda_g)=( \lambda_g \delta_e , \delta_e ),$$
where $e$ is the identity element of $\Gamma$. Then
the completion of $(M, \Vert \cdot \Vert_L)$ will be isometrically isomorphic to $\ell_2 (\Gamma)$ via the map sending $\lambda_g$ to $\delta_g$.
What can we say more generally?
Let's specialise for the moment to the case where the state $\omega$ is tracial. The completion of $(M,\Vert \cdot \Vert_L)$ is then denoted by $L_2(M)$ (and one can define $L_p(M)$ spaces as well by completing $M$ with respect to the norm $\Vert x \Vert_p= \omega( |x|^p)^{1/p}$). (In fact, if one works harder, one can define these noncommutative $L_p$-spaces when $\omega$ is not a trace.) These spaces can be considered as abstract completions of $M$, or they can be described as spaces of (unbounded) operators satisfying certain conditions. But that approach gets quite technical.
H: Isn't every subset of a compact space compact?
Let $\mathfrak{C}$ be an open cover of a compact topological space $X$, and let $\mathfrak{B}\subseteq \mathfrak{C}$ be a finite subcover. Then every subset of $X$ also has the same cover and subcover! Shouldn't every such subset also be compact then?
Motivation: I read somewhere that every closed subset of a compact space is compact. I wonder why it shouldn't be the case for every subset of $X$, rather than just the closed ones.
AI: The point is that EVERY cover must admit a finite subcover, and just because this is true for the ambient space does not mean it is true for every subset.
So given a compact space $X$ and $U \subset X$ arbitrary, you start with an open cover of $U$, and try to find a finite subcover of it.
But not every cover of $U$ comes from a cover of $X$, not even if $U \subset X$ is open. Consider $X=[0,1] \subset \Bbb{R}$, and $U=(0,1) \subset [0,1]$ and the cover
$$
U=\bigcup_{n=1}^{\infty}\left(\frac{1}{n},1-\frac{1}{n} \right)
$$
There is no open cover of $[0,1]$ which restricts to the above cover on $(0,1)$.
H: Which of the following numbers can be orders of a permutation $\sigma$ of $11$ symbols
Which of the following numbers can be the order of a permutation $\sigma$ of $11$ symbols, such that $\sigma$ does not fix any symbol?
$1. \;18$
$2.\; 30$
$3.\;15$
$4.\; 28$
Could anyone just give me hints?
AI: Think about the orders of the permutations $(1\ 2)(3\ 4\ 5)$ and $(1\ 2)(3\ 4\ 5\ 6)$ and see if you can think of a rule that gives the order of a permutation written in disjoint cycle notation. Now if $\sigma$ fixes no points, what does that tell you about the permutation written in disjoint cycle notation? From this you should be able to tell which orders are possible.
H: $f=g\; ; \;\bar{\mu}$-a.e. vs $\mu$-a.e.
Let $(X,\mathcal{M},\mu )$ be a topological... measure space and let $(X,\overline{\mathcal{M}},\overline{\mu})$ be its completion. Let $f,g$ each be either $\mathcal{M}$- or $\overline{\mathcal{M}}$-measurable functions.
It seems to me that "$f=g$ a.e." means the same thing for both $\mu$-a.e. and $\overline{\mu}$-a.e. My textbook (Folland) likes to write $f=g$ $\overline{\mu}$-a.e.; is there any reason to differentiate between them?
I'm using the definition that $f=g$ $\mu$-a.e. if there exists an $N\in \cal{M}$ such that $ \{ x: f(x) \ne g(x) \} \subset N$ and $\mu(N)=0$. Maybe sometimes people use other definitions?
AI: Just as a remark, since it is easier to be $\overline {\mathcal M}$-measurable, $ {\mathcal M}|\mathscr B(\Bbb R)\cup\overline {\mathcal M}|\mathscr B(\Bbb R) = \overline {\mathcal M}|\mathscr B(\Bbb R)$. Now, it depends how you define that $f=g$ $\mu$-a.e. If this is defined by $\mu(f\neq g) = 0$ then it is a stronger condition than the one $\bar\mu(f \neq g) = 0$ since in the former case the set $\{f\neq g\}$ has to be in $\mathcal M$, whereas in the latter it can belong to a larger class of sets.
Instead, if the definition is
$f=g$ $\mu$-a.e. if and only if $\{f\neq g\}\subseteq A$ for some $A$ s.t. $\mu(A) = 0$
then there is no difference between $\mu$-a.e. and $\bar\mu$-a.e.
H: Find function $f$ having it's gradient
I need to find function $f$ having it's gradient.
$$
\operatorname{grad} f = (x \cos y − 2)j + i \sin y
$$
I think $i$ and $j$ are some sort of vectors, but I'm not sure.
How do I find the function $f$?
I know that the gradient is the vector of fastest growth, but I don't see any vector here.
AI: Searching for a proper $f(x,y)$ leads us to require $$f_x=\sin y,~~~f_y=x\cos y-2$$ From $f_x=\sin y$ we get $f(x,y)=x\sin y+h(y)$ for some proper function $h(y)$. Now we have $f_y=x\cos y+h'(y)$, but $f_y=x\cos y-2$. So, $$h'(y)=-2$$ and then $h(y)=-2y$ (up to a constant), giving $f(x,y)=x\sin y-2y+C$.
H: Probability question
Suppose there are 3 red balls and 2 black balls in a box.
I have to find the probability of choosing 2 black balls.
Then what is wrong with this solution?
For the 1st ball being black it is 2/5, and for the second ball being black it is 1/4, after the 1st ball is black.
Then the total probability is 1/10.
If I do $\binom{2}{2}/\binom{5}{2}$, the answer is 1/10.
What is the difference between these two approaches? Will they always yield the correct answer?
What if the events aren't independent?
AI: For the first choice there are two black balls in the box out of five balls in total, so the probability of choosing a black ball is 2/5, not 1/5.
For the edited version: one method you can think of as looking at the problem step by step, whereas the other looks at the entire process as a whole, i.e. do I look at the choices for each individual ball picked, or just at the overall choices for two balls. Both are valid for these problems and give the same answer. In the case of independent events, the second method won't work, as $^nC_r$ chooses $r$ objects from $n$ but inherently without repetition, whereas independent events allow for repetition.
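The two approaches can also be checked by direct enumeration of the $\binom52=10$ equally likely pairs (a small check of my own):

    from itertools import combinations

    balls = ['r', 'r', 'r', 'b', 'b']
    pairs = list(combinations(range(5), 2))  # 10 equally likely unordered draws
    both_black = [p for p in pairs if balls[p[0]] == balls[p[1]] == 'b']
    print(len(both_black), '/', len(pairs))  # 1 / 10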
H: Seeking formal explanation of definite integral over infinitesimal interval
Why does :
$$\frac{\displaystyle\int_t^{t+h}f(s) \, ds}{h} = f(t)\text{ as }h \rightarrow 0\text{ ?}$$
Intuitively this makes sense, because the value of the integral is infinitesimally close to $f(t)$, you have $h$ of them, and you divide the result by $h$. However, I'd like a more formal explanation if possible.
AI: We will assume that $h$ approaches $0$ from the right. A similar argument deals with the general case. We will need to make some assumptions about $f$, since the result does not hold in general. We will show that if $f$ is continuous in some interval $[t,t+b]$, then
$$\lim_{h\to 0^+} \frac{1}{h}\int_t^{t+h} f(s)\,ds=f(t).$$
We need to assume (or prove) certain properties of the definite integral. Let $m_h$ be the minimum of $f(s)$ in the interval $[t,t+h]$, and let $M_h$ be the maximum of $f(s)$ in this interval. These both exist by continuity.
From any of the standard definitions of the definite integral, we can show that
$$h\, m_h \le \int_t^{t+h} f(s)\,ds\le h\, M_h.$$
Divide through by $h$. We get
$$m_h \le \frac{1}{h} \int_t^{t+h} f(s)\,ds\le M_h.$$
Now let $h\to 0^+$. Then $m_h\to f(t)$ and $M_h\to f(t)$, and now Squeezing gives us the desired result.
Another way: Or else we can use more machinery, like the Fundamental Theorem of Calculus. Consider the function $g(h)$ given by
$$g(h)=\frac{\int_t^{t+h}f(s)\,ds}{h}. $$
Using L'Hospital's Rule and the Fundamental Theorem, we find that
$$\lim_{h\to 0}g(h)=\lim_{h\to 0}\frac{f(t+h)}{1}.$$
The limit on the right is $f(t)$.
Remark: We stayed away from using the term "infinitesimal," since in the reals there is no positive element which is $\lt \frac{1}{n}$ for every positive integer $n$. One could reframe the first argument above using the language of non-standard analysis, but there seems to be no great advantage to doing so.
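Numerically the convergence is easy to observe; in this illustration of my own, with $f=\sin$ and $t=1$ the averages approach $\sin 1\approx0.8415$:

    import math

    t = 1.0
    for h in [0.1, 0.01, 0.001]:
        # exact value of (1/h) * integral of sin over [t, t+h]
        avg = (math.cos(t) - math.cos(t + h)) / h
        print(h, avg)
    print(math.sin(t))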
H: Default way of defining measure on subset of measure space
Let $(X,\mathcal{M},\mu)$ be a measure space and let $E\in \mathcal{M}$. I'm interested in knowing what the default way to define a measure space $(E,\mathcal{M}_E,\mu_E)$ is?
My guess would be the $\sigma$-algebra $\mathcal{M}_E = \{F \cap E : F\in \mathcal{M} \}$ defined on $E$, with measure $\mu_E = \mu|_{\mathcal{M}_E}$. In lieu of other information, is this the correct measure space to assume? Also, is there any other sensible way to define it?
I'm asking because I know what it means for $f:\mathbb{R} \rightarrow \mathbb{R}$ to be Lebesgue or Borel measurable, but want to make sure I'm not assuming the wrong thing when I see someone say that $f:[0,1] \rightarrow \mathbb{R}$ is Lebesgue or Borel measurable.
AI: Your guess is right; it's just a restriction of the measure on the larger space.
On a related note, in probability theory one works in a space whose total measure is $1$ and one often considers the measure
$$
\frac{\mu|_{\mathcal{M}_E}}{\mu(E)}.
$$
Values of this measure are "conditional probabilities given the event $E$".
Later postscript: Actually conditional probabilities given $E$ are often assigned to sets that are not subsets of $E$. The conditional probability of $A$ given $E$ is the value assigned to $A\cap E$ by the measure given in my second paragraph.
H: Using integration by parts to evaluate an integrals
Can't understand how to solve this problem: use integration by parts to evaluate this integral:
$$\int x\sin(2x + 1) \,dx$$
Can anyone solve this so I can understand how to do it? Thanks :)
AI: $$\int x \sin (2x+1)\;dx$$
Integration by parts: $\int u(x)\cdot v(x) \;dx=u\int v(x)\;dx-\int \dfrac {d}{dx}\,u(x)\left[\int v(x)\,dx\right]dx$
Here one question is which function should be taken as $u$ and which as $v$. I learnt to use "ILATE"; this is the sequence in which to choose $u$:
I = inverse function, L = log, A = algebraic, T = trigonometric, E = exponential. This helps most of the time in integration by parts.
$$x\int \sin (2x+1)\,dx-\int \left(\dfrac {d}{dx}x\right) \left(\int \sin (2x+1)\; dx\right)dx$$
$$x\left(-\dfrac {\cos (2x+1)}{2}\right)-\int 1\cdot \left(-\dfrac {\cos (2x+1)}{2}\right) dx$$
$$\dfrac {-x\cos (2x+1)}{2}+\dfrac {\sin (2x+1)}{4}+C$$
H: Does a linear quotient map have sections
Suppose $V$ is a vector space with vector subspace $N$. Then there is
a natural projection
$$
\pi_N: V \to V/N
$$
from the vector space $V$ to the quotient space $V/N$ of $V$ modulo $N$.
Does $\pi_N$ have sections?
That is, are there 'section maps' $i: V/N \to V$ with $\pi_N\circ i=id_{V/N}$? I'm
not asking for a single natural inclusion map $i$, because I know that in general
there is none; I'm asking whether I can assume that there are any sections of
the projection $\pi_{N}$.
The question arose since in other parts of mathematics similar sections are
important (bundle sections, retracts in homology etc.).
AI: $0 \to N \to V \to V/N \to0$ is exact and $V/N$ is a vector space, i.e. a module over a field; thus it's free and a fortiori projective. This means that your exact sequence splits, and this is equivalent to the existence of the section you're looking for.
H: How to write this gradient as a vector?
How to write this gradient as a vector using brackets []?
Is this:
$$
\operatorname{grad}f=(x\cos y−2)j+i\sin y
$$
equal to this:
$$
\operatorname{grad}f=[\sin y,x\cos y−2]\text{ ?}
$$
Thanks
Regards.
AI: You have to read it in this manner:
$i$ indicates the first coordinate;
$j$ indicates the second.
Then $\operatorname{grad} f= (\sin y,\ x \cos y - 2)$, so your bracket form is correct.
H: Question: What does "nonzero" polynomial mean?
What does "nonzero" polynomial mean?
Thank you
AI: Since there is no answer, here is what I stated in the comments:
Usually, a nonzero polynomial $f$ is a polynomial where not every coefficient is zero, i.e.
$$f(X)=\sum\limits_{k=0}^n a_kX^k\quad(n\ge0)$$
and one of the $a_i\neq 0$.
Depending on context, even the definition that $f(x)\neq0$ for some $x$ could be used, however this is rare.
It might seem as if these were equivalent, however consider
$$f(X)=X^2+X$$ in $\mathbb{Z}/2\mathbb{Z}$. For $0$, it obviously evaluates to zero, for $1$ it is $f(1)=1^2+1=1+1=0$ (because $1+1=0$ in $\mathbb{Z}/2\mathbb{Z}$).
This polynomial is zero according to the second definition, but not according to the first.
Therefore, the first definition is almost always used.
H: Matrix Inverses
So in class we have been discussing matrix inverses, and the quickest way that I know of is to take a matrix $A$, put it side by side with the identity matrix, like $[A|I_{n}]$, and apply the Gauss-Jordan algorithm until it is of the form $[I_{n}|A^{-1}]$, where $A^{-1}$ will show up assuming $A$ is invertible.
We also discussed using the formula $A^{-1}=\frac{\operatorname{adj}(A)}{\det(A)}$, however after a few examples, it was clear that this formula would take far too long to find the inverse of A as the matrix size got bigger.
Is the first method I described the quickest way to find the inverse of a matrix, or is there a more efficient way?
AI: This depends on whether you're talking about "pen-and-paper" based methods, or computer-based ones.
For pen-and-paper, the Gauss-Jordan elimination is easier and less error-prone than other methods most of the time. In some special cases, the determinants might be easy to calculate because of the matrix having some special form and thus allow the inverse to be found faster -- but for general matrix, it's hard to beat the G-J approach.
On the computer, there are faster methods available (useful mainly for matrices of large dimensions); the fastest ones for general matrices run as fast as matrix multiplication can be performed.
Wikipedia has a nice list of various methods.
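For reference, here is a minimal, unoptimized Gauss-Jordan inversion in Python (my own sketch of the $[A|I_n]$ method with partial pivoting; not production code):

    import numpy as np

    def gauss_jordan_inverse(A):
        n = len(A)
        M = np.hstack([np.array(A, dtype=float), np.eye(n)])  # the block [A | I]
        for col in range(n):
            pivot = col + np.argmax(np.abs(M[col:, col]))     # partial pivoting
            if np.isclose(M[pivot, col], 0.0):
                raise ValueError("matrix is singular")
            M[[col, pivot]] = M[[pivot, col]]
            M[col] /= M[col, col]                             # scale pivot row to 1
            for row in range(n):
                if row != col:
                    M[row] -= M[row, col] * M[col]            # clear the rest of the column
        return M[:, n:]                                       # the block now reads [I | A^-1]

    print(gauss_jordan_inverse([[2.0, 1.0], [5.0, 3.0]]))  # [[ 3. -1.] [-5.  2.]]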
H: Cumulative Probability Distribution Example Question
Flip a fair coin three times. let T be the random variable that denotes the number of tails that occur given that at least one head occurred. Calculate the Probability and Cumulative Distribution Functions.
I was having a little trouble with this question so I decided to enumerate all the possibilities to get started and labeled the value of T in each.
I thought that the outcome "ttt" is removed from the sample space because the question stem says "given that at least one head occurred." Is this an incorrect assumption?
So the total number of possibilities is $2^3 -1 = 7$.
$
\begin{array}{c|c|c}
\text{outcome} & T & \text{count} \\
\hline
hhh & 0 & \binom{3}{0}=1\\
\hline
thh & 1 \\
hth & 1 & \binom{3}{1}=3 \\
hht & 1 \\
\hline
tth & 2 \\
tht & 2 & \binom{3}{2}=3 \\
htt & 2 \\
\hline
ttt & N/A & N/A\\
\end{array}
$
Since this is a fair coin, each toss has a probability of 0.5 and each outcome has a probability of $0.5^3$.
So, I thought the Probability function would be $\Pr(T=t)= \binom{3}{t} \cdot 0.5^3$, but this is incorrect. The solutions list:
$\Pr[T=t]=\Pr[H=3-t \mid H \ge 1] = \cfrac{ \binom{3}{3-t} \cdot \frac{1}{8}}{\frac{7}{8}}\tag{1}$
I did another example which I thought was very similar:
Flip a loaded coin four times. Assume that Pr(head) = 0.7. Let the random variable H denote the number of heads that result. Find the Probability Distribution Function.
I correctly got $\Pr(H=h)=0.7^h \cdot 0.3^{4-h} \cdot \binom{4}{h}$ for $0 \le h \le 4$. Here there's $\binom{4}{h}$ ways of getting a set for a particular value of heads and you multiply by the probabilities of heads and tails.
I realize that the first question is a conditional probability question while the 2nd is not. However, since I limited the sample space from the beginning, I thought my approach was legitimate. What is going on? Thanks in advance.
AI: In general, if $A$ and $B$ are events, then the conditional probability of $A$, given that $B$ has happened, (in symbols, $\Pr(A|B)$), is defined by the formula
$$\Pr(A|B)=\frac{\Pr(A\cap B)}{\Pr(B)}.$$
In our case, we have $B$ is the event "at least $1$ head." Note that $\Pr(B)=\frac{7}{8}$.
To find the probability of exactly $1$ tail, given that there is at least $1$ head, let $A$ be the event "exactly $1$ tail." Then $A\cap B$ is also the event "exactly $1$ tail," and, as you calculated, has probability $\frac{3}{8}$. It follows that the conditional probability of exactly $1$ tail, given there is at least one, is, by the displayed formula above, equal to
$$\frac{\frac{3}{8}}{\frac{7}{8}},$$
which simplifies to $\frac{3}{7}$.
Or we can think of the events HHH, HHT, HTH, HTT, THH, THT, and TTH as equally likely, given there is at least one head. So each has (conditional) probability $\frac{1}{7}$. Now for the conditional probabilities we can count. I do not advocate this approach, since the defining formula is far more versatile.
In the case of your random variable $T$, it turns out to have values $0$, $1$, or $2$, with probabilities $\frac{1}{7}$, $\frac{3}{7}$, and $\frac{3}{7}$ respectively. Note that $T$ is an ordinary random variable, but not the same random variable as the (unconditional) number of heads, since we are disregarding any experiment in which we got $0$ heads. In particular, the probabilities add to $1$. (The probabilities you computed did not.)
Now we calculate the cumulative distribution function $F_T(t)$, which, as you know, is $\Pr(T\le t)$.
If $t\lt 0$, then $F_T(t)=0$.
If $0\le t\lt 1$, then $F_T(t)=\frac{1}{7}$.
If $1\le t\lt 2$, then $F_T(t)=\frac{4}{7}$.
Finally, if $t\ge 2$, then $F_T(t)=1$.
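A quick simulation (my own sketch) reproduces the conditional probabilities $\frac17,\frac37,\frac37$ and hence the jumps of $F_T$:

    import random
    from collections import Counter

    random.seed(0)
    counts, kept = Counter(), 0
    for _ in range(700_000):
        flips = [random.random() < 0.5 for _ in range(3)]  # True = head
        if any(flips):                                     # condition on at least one head
            counts[3 - sum(flips)] += 1                    # record the number of tails
            kept += 1
    print({t: round(c / kept, 3) for t, c in sorted(counts.items())})
    # approximately {0: 0.143, 1: 0.429, 2: 0.429}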
H: Permuting and finding combinations of bit strings
I am working on typical computer science math and one theme in many text books is to find different permutations/combinations of binary strings:
1011011101 is an example of a binary number of length 10. How many
binary numbers of length 10 end with 111 and contain exactly two
zeros?
The example above is a typical question, and the wording is often harder than the math:
The solution above is: the number is completely determined if we know
the places of the 2 zeros. The number starts with 1 and ends with
111. There are still six digits to be determined. Once we identify two locations for the zeros, the number is fixed. The number of ways to
choose 2 places out of the 6 is $\bf\binom{6}{2}$.
Question: Why does the string have to start with a 1?
I got $\binom{7}{2}$.
For example, what about the string 1111111100, which does not have a 1 in the first place?
I sort of almost get it, then slip up on one of the assumptions. Do you have a way of approaching these binary string questions, e.g. by drawing them out, that will keep me from slipping up?
AI: I think the trick is that the question specifies a binary number, rather than a string, so the normal assumption would be that a number has no leading zeroes (in this case a $10$ digit binary number must start with a $1$).
Of course a binary string is just a sequence of characters from $\{0,1\}$, so $01$ is clearly a different string to $1$, whereas we would consider them the same number.
There is an argument here about numbers vs. their representations, so we could consider that $01$ and $1$ are both representations of the same number, much as $2$ base ten is the same number as $10$ base two, it's just that we have some rules of usage that we often forget we are using when we discard leading zeroes and identify a number with its representation.
So given that it starts with a $1$, and the last three digits are $111$, there are only six spots left that could be a $0$.
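Brute-force enumeration (a check of my own) confirms the count $\binom62=15$ once the leading-$1$ convention for numbers is adopted:

    count = 0
    for n in range(2**9, 2**10):  # 10-digit binary numbers: the leading bit is 1
        s = format(n, 'b')
        if s.endswith('111') and s.count('0') == 2:
            count += 1
    print(count)  # 15 == C(6, 2)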
H: What's the formula for $\sum_{n=0}^{\infty}\left ( an+b \right )x^{n}$=?
Use the two formulas
$\sum_{n=0}^{\infty}x^{n}=\frac{1}{1-x}$ and $\sum_{n=0}^{\infty}\left ( n+1 \right )x^{n}=\frac{1}{(1-x)^{2}}$ to find a formula for $\sum_{n=0}^{\infty}\left ( an+b \right )x^{n}$ for all pairs of constants $a$ and $b$ (both non-zero).
AI: HINT:
$$(an+b)x^n=a(n+1)x^n+(b-a)x^n$$
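Carrying the hint through, for $|x|<1$ one obtains the closed form $$\sum_{n=0}^{\infty}(an+b)x^n=a\sum_{n=0}^{\infty}(n+1)x^n+(b-a)\sum_{n=0}^{\infty}x^n=\frac{a}{(1-x)^2}+\frac{b-a}{1-x}.$$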
H: A question about chain rule for partial derivatives
I know this isn't supposed to be difficult, but I'm not sure how to use the chain rule.
Let $f:\mathbb{R} \rightarrow \mathbb{R}$ be differentiable and define $u: \mathbb{R}^2 \rightarrow \mathbb{R}$ by $u(x,y):=e^{2x}f(ye^{-x})$. I want to show that $u$ satisfies the partial differential equation $$\frac{\partial u}{\partial x} + y\frac{\partial u}{\partial y}=2u.$$
I know how to take partial derivatives with respect to $x$ and $y$, but I don't know how to implement the chain rule here. How do I do this? I appreciate your help.
AI: Here is how one starts off proving the equality.
Consider differentiating the function with respect to each variable. We have:
$$\frac{\partial u}{\partial x} = 2e^{2x}f(ye^{-x}) + (-ye^{-x}e^{2x}f'(ye^{-x}))$$
$$= 2e^{2x}f(ye^{-x}) - ye^{x}f'(ye^{-x})$$
$$\frac{\partial u}{\partial y} = e^{-x}e^{2x}f'(ye^{-x})$$
Substituting these into the left-hand side of the given equality yields:
$$LHS = 2e^{2x}f(ye^{-x}) - ye^{x}f'(ye^{-x}) + y(e^{-x}e^{2x}f'(ye^{-x}))$$
$$= 2e^{2x}f(ye^{-x}) - ye^{x}f'(ye^{-x}) + ye^{x}f'(ye^{-x})$$
$$= 2e^{2x}f(ye^{-x})$$
$$= 2(e^{2x}f(ye^{-x}))$$
$$= 2u = RHS$$
H: Is $\lim_{r\to 0}\frac{|A\cap B(x,r)|}{|B(x,r)|}\neq 1$?
Let $A\subset\mathbb{R}^N$ be a Lebesgue measurable set and denote by $|\cdot|$ the Lebesgue measure. Fix some $x\in A$ and $a\in\mathbb{R}^N$. Consider the set $$S_a=\{v\in S^{N-1}:\ \langle a,v\rangle\geq\frac{1}{2}\|a\|\}$$
Suppose that for each $v\in S_a$, there exists $t_v>0$ such that $x+tv\notin A$ for $0<t<t_v$. Can I conclude that $$\lim_{r\to 0}\frac{|A\cap B(x,r)|}{|B(x,r)|}\neq 1$$
where $B(x,r)$ denotes the ball in $\mathbb{R}^N$ with center in $x$ and radius $r$?
Edit: Hagen's first answer was right, but it was not what I was expecting, so I have changed the question a little bit.
Thank you
AI: (This was for the original formulation: "Can I conclude that $lim=0$?" instead of "$\lim\ne 1$": Certainly not. Consider $A=\{(x,y)\mid y\le 0\}$, $a=(0,1)$, $x=(0,0)$. Then $t_v=1$ works for all $v\in S_a$ and the limit is $\frac12$.)
Let $$S_{a,n} =\left\{v\in S_a\biggm| \forall t\in\left]0,\frac 1n\right[\colon x+t v\notin A\right\}.$$
Then all $S_{a,n}$ are measurable and ordered by inclusion and we have $$S_a=\bigcup_{n\in\mathbb N}S_{a,n}.$$
For $n$ big enough, we have $|S_{a,n}|>\frac12|S_a|$ and hence
$$\lim_{r\to0}\frac{|A\cap B(x,r)|}{|B(x,r)|} \le \frac{|S^{N-1}\setminus S_{a,n}|}{|S^{N-1}|}<1-\frac{|S_a|}{2|S^{N-1}|}.$$
H: Trying to prove a theorem about convergence of a sequence.
The following is the problem that I am working on.
Prove that if $\{s_n\}$ converges, then $\{|s_n|\}$ converges.
The following is the idea of the proof I'm trying to make, so it's a little loose. However, I'm not 100% sure if I'm doing it right.
proof:
If $\{s_n\}$ converges, then $\{s_n\}$ must be bounded and Cauchy, i.e., $\delta(s_n,s_m) < \epsilon$ for large enough $n,m$.
Consider $\delta(|s_n|,s_m)$.
If $s_n > 0$ then $\delta(|s_n|,s_m) =\delta(s_n,s_m) < \epsilon$, so we are done.
And I'm stuck here. I cannot make a good argument when $s_n<0$. I can see that $\{|s_n|\}$ is bounded above by either $\sup (s_n)$ or $|\inf (s_n)|$, but I remember that boundedness doesn't guarantee convergence. $\{s_n\}$ is not necessarily monotonic either, so I was wondering what I could do.
Can someone help me out ?
AI: By the reverse triangle inequality, we have that $||s_n| - |s_m|| \leq |s_n - s_m|$. Thus your sequence of absolute values is Cauchy as well, so it converges.
H: Multiplicity of the simple $R$-module $M$ in the semisimple ring $R$
I'm confused about the conclusion of Wedderburn's structure theorem for semisimple rings.
Let's consider the special case where $R=M^n$ as modules for some simple module $M$.
Wedderburn's theorem says that $R=M_n(D)$ for $D=\text{End}_R(M)^{o}$.
Thus, $R=(D^n)^n$ as modules. I see that $D^n$ is a simple $R$-module.
So, since all simple $R$-modules appear in the decomposition of the $R$-module $R$ as a direct sum of simple $R$-modules, I conclude that $M\cong D^n$. In particular, $M$ can be seen as a $D$-module and $n=\dim_D(M)$ (we can talk about dimension since $D$ is a division ring by Schur's lemma).
My question:
Remembering that $D=\text{End}_R(M)^{o}$, is there a natural way to view $M$ as a $D$-module? What is the ring homomorphism $\text{End}_R(M)^o\rightarrow\text{End}_{\mathbb{Z}}(M)$ that describes the action of $D$ on $M$?
There is an obvious way to view $M$ as an $\text{End}_R(M)$-module since $\text{End}_R(M)$ is a subring of $\text{End}_{\mathbb{Z}}(M)$, but since $D$ is the opposite ring relative to $\text{End}_R(M)$ and not $\text{End}_R(M)$ itself, I am confused.
Motivation: I'd like to be able to look at a semisimple ring and ask how many times a simple module $M$ appears in its decomposition by looking at $M$ as an $\text{End}_R(M)^o$-module and computing the dimension.
AI: An abelian group $M$ has a right $D$ module structure iff it has a left $D^{op}$ module structure.
If $M$ is a left $R$ module, then there is the natural operation of $D$ on the right of $M$. Moving over to the opposite ring, this becomes a left $D^{op}$ structure on $M$.
Here's the way to see where the dimension $n$ comes from for a simple Artinian ring. It's easy to prove that all simple left $R$ modules are isomorphic to each other, so you can start by decomposing $R$ into minimal left ideals. Suppose $R=\oplus_{i=1}^n S$ is a direct sum of simple left $R$ modules. Then $D=End(_RS)$ is a division ring, and $R^{op}\cong End(_RR)=End(\oplus_{i=1}^n S)\cong M_n(End(_RS))=M_n(D)\cong M_n(D^{op})^{op}$. Then we have that $M_n(D^{op})\cong R$ as rings. But clearly $(D^{op})^n$ is a simple left $M_n(D^{op})$ $(=R!)$ module; therefore $S\cong (D^{op})^n$ as left $R$ modules.
Notice that $D^{op}$ is embedded as the constant diagonal matrices in $R$, and multiplication by these matrices is the same as looking at $(D^{op})^n$ as a left $D^{op}$ vector space. Switching to the right, it is a right $D$ vector space.
Now for general semisimple rings, you first just split the ring into simple rings. Then it's easy to show that the simple left modules are just exactly the simple modules for each simple ring in that factorization. So, you can reduce the general case to finitely many cases like the one above.
H: Probability of k out of m microbes splitting.
A microbe either splits into two perfect copies of itself or else disintegrates. If the probability of splitting is p, what is the probability that there will be m microbes in the nth generation. Furthermore, what is the probability that out of these m microbes, k will split?
So far, I found that the probability that k will split (out of the total of m microbes) is $P=mp^k(1-p)^{m-k}$. I derived this equation by induction (i.e. looking for an equation that agrees with numerical values for small m and k). I am not fully certain that the coefficient m should be there from a formal mathematical point of view, since I think that $P=p^k(1-p)^{m-k}$ makes more intuitive sense (however, the numerical values of P for small m and k are wrong).
How can I combine the probability of $k$ microbes splitting out of $m$ microbes with the probability of there being $m$ microbes at the $n$th generation?
Adapted from Problem 4 in Newman's book, 'A Problem Seminar', available online at Springer.
Related question: Confusion regarding probability of microbe producing everlasting colony.
AI: The probability should actually be
$$\binom{m}{k} p^k (1-p)^{m-k}$$
and is a transition probability: it tells you the probability of having $m+k$ bacteria in the $(n+1)$th generation given you had $m$ in the $n$th. So, you have a Markov chain with the following master equation:
$$P(m,n)=\sum_{k=0}^{\lfloor m/2 \rfloor} \binom{m-k}{k} p^k (1-p)^{m-2 k} P(m-k,n-1) \; .$$
H: Finding minimal polynomial
Let $A \in \mathbb{R}^{n \times n}$ be a symmetric matrix such that $A$ is not of the form
$A=c I_n$, $c \in \mathbb{R}$, and $(A-2I_n)^3 (A-3I_n)^4=0$. Find the minimal polynomial $m_A(x)$ of $A$.
I know that $m_A(x) | (x-2)^3(x-3)^4$, but I am stuck here. Any help appreciated.
AI: Here are some hints:
$1$. A real symmetric matrix is always diagonalizable (do you know that?).
$2$. A matrix is diagonalizable $\Leftrightarrow$ its minimal polynomial factors into distinct linear factors.
Hence $(x-2)(x-3)$ will be the minimal polynomial.
H: Bijective conformal map from half disc to upper half plane
I'm trying to find a bijective conformal map from the half disc $\{z: |z| < 1, \Re(z)>0\}$ to the upper half plane $\{z: \Im(z) > 0\}$. Any help is appreciated. Thanks!
AI: Use a Möbius transform to bring one vertex of the half disk to $\infty$ and the other to $0$. Now you have a quadrant. Square to get a half plane.
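To make this concrete (an elaboration of my own, written for the half disc $\{|z|<1,\ \Re(z)>0\}$ with vertices $\pm i$): the Möbius map $S(z)=\frac{z-i}{z+i}$ sends $i\mapsto 0$, $-i\mapsto\infty$, $0\mapsto -1$ and $1\mapsto -i$, so it carries the half disc onto the open third quadrant; squaring then doubles the angles, so $z\mapsto\left(\frac{z-i}{z+i}\right)^2$ is a bijective conformal map onto the upper half plane.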
H: How to prove/show $1- (\frac{2}{3})^{\epsilon} \geq \frac{\epsilon}{4}$, given $0 \leq \epsilon \leq 1$?
How to prove/show $1- (\frac{2}{3})^{\epsilon} \geq \frac{\epsilon}{4}$, given $0 \leq \epsilon \leq 1$?
I found the inequality while reading a TCS paper, where it was taken as a fact in proving some theorems. I'm not a math major, and I'm not as fluent in proving inequalities such as these as I would like to be; hence I'd like to know why this is true (it does hold for $\epsilon$ ranging from $0$ to $1$), and how to go about proving such inequalities in general.
AI: The function $f(x) = 1 - \left( \tfrac{2}{3} \right)^x$ is increasing and concave down on the interval $[0, 1]$. One consequence of this fact is that the curve lies above the secant line connecting $(0, f(0)) = (0, 0)$ and $(1, f(1)) = (1, \tfrac{1}{3})$. This is the line $y = \tfrac{1}{3} x$, so actually the slightly stronger inequality holds for all $0 \le \epsilon \le 1$:
$$
1 - \left( \frac{2}{3} \right)^\epsilon \ge \frac{\epsilon}{3}
$$
Here's a picture of the curve $y = f(x)$, the secant line $y = \tfrac{x}{3}$, and the line $y = \tfrac{x}{4}$.
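A quick numerical check of the stronger inequality (my own sketch); the minimum of the gap is $0$, attained at the endpoints:

    import numpy as np

    x = np.linspace(0.0, 1.0, 10_001)
    gap = 1.0 - (2.0 / 3.0)**x - x / 3.0
    print(gap.min())  # 0.0, so 1 - (2/3)^x >= x/3 on [0, 1]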
H: Equivalence between these definitions of ordinal numbers
Von Neumann defines an ordinal $\alpha$ as a transitive set whose elements are well-ordered with respect to the membership relation $\in$.
Meanwhile, in Naive set Theory, Halmos defines an ordinal $\alpha$ as a well-ordered set such that $\forall \zeta \in \alpha$, the initial segment determined by $\zeta$ is $\zeta$ itself. That is : $s_\zeta = \zeta$ with \begin{equation}s_\zeta := \{\beta \in \alpha :~~ \beta < \zeta\}.\end{equation}
Although my intuition tells me that these definitions are equivalent, I'm not able to prove it in a clean and clear way.
Here is what I got so far :
For $1. \Leftarrow 2.$, the transitivity of the set $\alpha$ follows from the condition $s_\zeta = \zeta$, since $s_\zeta \subseteq \alpha$ for all $\zeta \in \alpha$.
Also, from the condition $s_\zeta = \zeta$, we have that the predecessors of $\zeta$ in the well-ordered set $\alpha$ are exactly the elements of $\zeta$.
All that is lacking to turn this into an actual proof of 1. is a bit of structure...
For $1. \Rightarrow 2.$, since $\alpha$ is well-ordered by membership we have $s_\zeta \supseteq \zeta$. I'm currently working on the reverse inclusion using transitivity but once again the whole thing lacks clarity.
AI: Let $\alpha$ be a Neumann ordinal.
By definition,$\alpha$ is well-ordered by $<\;:=\;\in$.
Let $\zeta\in \alpha$. Then for $\beta\in\alpha$ we have $\beta<\zeta\iff \beta\in \zeta$. Hence $s_\zeta\subseteq \zeta$ and more precisely $s_\zeta=\zeta\cap\alpha$. But by transitivity, $\zeta\in\alpha$ implies $\zeta\subseteq \alpha$ so that indeed $s_\zeta=\zeta$. We conclude that $\alpha$ is a Halmos ordinal.
H: A Complete k-partite Graph
A complete $k$-partite graph is a graph whose nodes are partitioned into disjoint sets, such that there are no edges between nodes in the same set, and there is an edge between every pair of nodes from different sets.
The subsets are $X_1,\dots,X_k$, and the total number of nodes is $N$. What is the maximum number of edges that can be in this complete $k$-partite graph?
AI: Divide up the vertices into your $k$ sets of sizes $a_{1}+...+a_{k}=N$ and consider a vertex in the first set. It has $a_{2}+...+a_{k}$ possible edges to make, therefore the contribution of the first disjoint set is $a_{1}(a_{2}+...+a_{k})$ edges. Since this first set has made as many edges as it may, we then 'ignore' these $a_{1}$ vertices and proceed in a similar fashion with the remaining disjoint sets. The total will amount to $a_{1}(a_{2}+...+a_{k})+a_{2}(a_{3}+...+a_{k})+...+a_{k-1}a_{k}$ edges.
If your sets are all of size $n/k$ then the total number of edges should at max be $\frac{n^{2}}{k^{2}}[(k-1)+(k-2)+...+1]=\frac{n^{2}(k-1)}{2k}$.
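Equivalently, the count is $\binom N2$ minus the $\sum_i\binom{a_i}2$ edges missing inside the parts, which simplifies to $\frac{N^2-\sum_i a_i^2}{2}$; a small check of my own:

    def multipartite_edges(sizes):
        # complete multipartite edge count: all pairs minus pairs inside a part
        N = sum(sizes)
        return (N * N - sum(a * a for a in sizes)) // 2

    print(multipartite_edges([2, 3, 4]))  # 2*3 + 2*4 + 3*4 = 26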
H: Derivate of finite sum?
I'm having trouble finding the derivative of the following finite sum:
$$\frac{\mathrm{d}}{\mathrm{d}n} \sum\limits_{k=1}^{n-1} \ln\frac{n-k}{n}$$
I get the following:
$$\frac{1}{n-1}+\frac{1}{n-2}+\ldots+\frac{-1}{n}+\frac{-1}{n}$$
I know the closed form is:
$$\frac{\mathrm{d}}{\mathrm{d}n} \sum\limits_{k=1}^{n-1} \ln\frac{n-k}{n} =
\frac{\mathrm{d}}{\mathrm{d}n} \ln\frac{n!}{n^n} =
\mathrm{polygamma}(n+1) - \ln(n) - 1$$
But I can't seem to figure out why it cannot be evaluated by taking the derivative of each term. Thanks in advance!
AI: You cannot differentiate the terms individually and then sum them, since the number of terms in the sum depends on the variable with respect to which we are differentiating. For a simple illustration, $\sum_{k=1}^{n} 1 = n$ has derivative $1$ with respect to $n$, even though every individual term is constant and has derivative $0$.
H: Combinatorics: Is it true to say $s \in F$ where $s \in S$ and $F$ is a set of subsets of $S$
From this problem: (from Measure Theory by Paul R. Halmos, Springer-Verlag, 1950)
Let $S$ be a set. Suppose that $s$ is an element of $S$, $T$ is a subset of $S$, and $F$ is a set of subsets of $S$. How many statements of the form $X R Y$ are possible, where $X$ and $Y$ are each taken from $\{S, s, T, F\}$ and $R$ is taken from $\{\in, \subseteq\}$? Classify each statement as always true, possibly true, or always false.
Is it true to say: $ s \in F $?
Ex: $S = \{1,2,3\}, s = 1,$ and $F = \{\{1\}, \{2,3\}\}$.
I'm thinking that it isn't true because: $1 \neq \{1\}$. Can anyone confirm?
EDIT: Thank you! I can continue with the problem now.
AI: In your example, indeed $s\notin F$ because $1\ne\{1\}$ (and $1\ne\{2,3\}$).
But with $S=\{\{1\},1,2\}$, $s=\{1\}$, $F=\{\{1\}\}$, you do have $s\in F$. |
H: What will happen if I try to print an impossible solid into a 3D printer?
What would be the result of a 3D modeled impossible solid, like the Penrose Triangle, printed out of a 3D printer?
AI: Gershon Elber of the Computer Science Department at Technion in Israel works on creating similar objects by introducing contortions into 3D printed media that, when viewed from the right angle, produce the appropriate visual illusion of an impossible object.
Here is his work on Escher For Real and a Youtube video demonstrating the technique and 3D printed fabrication. |
H: Binomial Expansion Word Problem (Creating a Equation)
I was working on my math textbook (Nelson Functions 11) and came across the following word problem. This question is shown in the "Pascal's Triangle and Binomial Expansions" section of the book.
"Using the diagram at the left (shown below), determine the number of different ways that Joan could walk to school from her house if she always travels either north or east."
(Please note that Joan walks on the black lines and not on the white squares)
Diagram:
I'm pretty sure that the question involves me simply finding a binomial expression of the form $(a+b)^n$ and expanding it, as that was what we learned in class before this question was assigned. However, I have no idea how that or Pascal's Triangle could relate to this problem (it looks more like probability to me) and I don't know how to find the equation used to solve this problem.
AI: HINT: Label each intersection with the number of ways in which Joan can reach it. She can reach her house in exactly one way: she takes no steps. She can reach the intersection a block to the east in one way: she must take one eastward step. Similarly, she can reach the intersection a block to the north in one way: she must take one northward step. She can reach the intersection one block to the northeast of her house, however, in two ways: she can go east, then north, or she can go north, then east.
I’ve started the labelling process in the grid below, whose cells correspond to the street intersections, not to the city blocks; I’ve included two more labels besides the ones mentioned above. Note that in general, she can reach each intersection either from the one immediately south of it or from the one immediately west of it, unless it’s on the southern or western boundary of the square.
$$\begin{array}{|c|c|c|c|c|c|} \hline
?&?&?&?&?&?\\ \hline
?&?&?&?&?&?\\ \hline
?&?&?&?&?&?\\ \hline
1&3&?&?&?&?\\ \hline
1&2&3&?&?&?\\ \hline
1&1&1&?&?&?\\ \hline
\end{array}$$
If you finish filling in the labels correctly, both the labels themselves and the way in which you calculated them should give you the connection with Pascal’s triangle. |
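If it helps, here is a small sketch (mine, not part of the original answer) that fills in the labels exactly as described, each intersection's count being the sum of the counts immediately south and west of it; I'm assuming a $5\times5$-block grid, i.e. $6\times6$ intersections:

    n = 6  # intersections per side (5 city blocks)
    ways = [[0] * n for _ in range(n)]
    for i in range(n):      # i: blocks north of Joan's house
        for j in range(n):  # j: blocks east of Joan's house
            if i == 0 or j == 0:
                ways[i][j] = 1  # a single route along each boundary street
            else:
                ways[i][j] = ways[i - 1][j] + ways[i][j - 1]
    print(ways[n - 1][n - 1])  # 252 = C(10, 5), an entry of Pascal's triangle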
H: How do you solve this equation: $10 = 2^x + x$?
Is it possible to solve this equation?
\begin{align}
a &= b^x + x \\
a-x &= b^x \\
\log_b(a-x) &= x
\end{align}
If $a$ and $b$ are known, how do you find $x$?
AI: As noted in the comments, since the function is monotonically increasing, there is only one real solution. To find the solution for say $10-2^x-x=0$, you can use Newton's method to iterate:
$$x_{n+1} = x_n+\frac{10 - 2^{x_n} - x_n}{2^{x_n}\ln 2+1}$$
Alternatively, you could use the Lambert $W$ function to write the solution as:
$$x = 10 -\frac{W(2^{10} \ln(2))}{\ln(2)}$$
To get to this, use the definition:
$$x = W(x)\exp(W(x))$$
Then set $y= 10-x$, so that $y = 2^{10}e^{-y\ln 2}$; multiplying through by $\ln 2$ gives $(y\ln 2)\, e^{y\ln 2} = 2^{10}\ln 2$, so $y\ln 2 = W(2^{10}\ln 2)$, which rearranges to the expression above.
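For a concrete check (my own sketch, not part of the original answer), both approaches give the same root, roughly $x \approx 2.840$:

    import mpmath as mp

    f  = lambda x: 10 - 2**x - x
    fp = lambda x: -(2**x * mp.log(2) + 1)

    x = mp.mpf(3)          # initial guess
    for _ in range(8):     # Newton's method
        x = x - f(x) / fp(x)

    closed = 10 - mp.lambertw(2**10 * mp.log(2)) / mp.log(2)
    print(x)       # ≈ 2.840
    print(closed)  # the same value via the Lambert W expression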
H: Relationship Between Tangent Function and Derivative
Is there a relationship between the trigonometric function tan(x) and the derivative of y with respect to x? Are they just named similarly by coincidence?
AI: Do you mean $\frac {dy}{dx}$ is reminiscent of the triangle definition of tangent $\tan \theta=\frac{\text{opposite}}{\text{adjacent}}$?
The derivative of a function at a point can be interpreted as the slope of the tangent line to that point on the graph of the function. This is distinct from the function tangent, which can be geometrically interpreted as the length of a special tangent to a unit circle (see below) given a certain angle.
You could connect them in a roundabout way - if you take the tangent line to a function's graph at a certain point, then extend it to intersect the x-axis, the tangent of the angle it forms with the x-axis (measured counterclockwise from the x-axis) will be the derivative of the function at that point.
H: Proof of correctness of Putzers algorithm
I have a question regarding the proof (seen below) of Putzer's algorithm for matrix exponentiation. It's written by our Danish lecturer at the university, so I translated the important parts into somewhat readable English.
What I don't understand is the line:
$$\sum^{n-1}_{k = 0} (\lambda_{k+1} r_{k+1}(t) + r_k(t))P_k - A \sum^{n-1}_{k = 0} r_{k+1}(t)P_k = \sum^{n-1}_{k = 0} (-r_{k+1}(t)(A - \lambda_{k+1}I)P_k + r_k(t)P_k)$$
Can anyone tell me how to go from the left hand side to the right hand side?
Definition:
Let $A \in Mat_{n, n}(\mathbb{C})$. Let $\lambda_1, \dotsc, \lambda_n$ be the eigenvalues of $A$ (counting algebraic multiplicity). Let $P_0 = I$ and for $k = 1, \dotsc, n$, $P_k = \Pi^{k}_{j = 1}(A - \lambda_j I)$. Define
\begin{equation*}
Q(t) = \sum^{n-1}_{k = 0} r_{k+1}(t)P_k
\end{equation*}
where $r_1(t) = e^{\lambda_1 t}$ and for $k = 2, \dotsc, n$
\begin{equation}
r_k(t) = e^{\lambda_k t} \int^{t}_{0} e^{-\lambda_k s} r_{k-1}(s) \, d s
\end{equation}
It is now true that $Q(0) = I$ and $Q'(t) = A Q(t)$
Proof:
We observe that $r_1(0) = 1$ and $r_k(0) = 0$ for $k = 2, \dotsc, n$, so $Q(0) = P_0 = I$.
For $k > 1$ we now have that
\begin{align*}
r'_k(t) &= \lambda_k e^{\lambda_k t} \int^{t}_{0} e^{-\lambda_k s} r_{k-1}(s) \, ds +
e^{\lambda_k t} e^{-\lambda_k t} r_{k-1}(t)\\
&= \lambda_k r_k(t) + r_{k-1}(t)
\end{align*}
Defining $r_0(t) = 0$ we also see that this is true for $k = 1$.
We now have
\begin{equation*}
Q'(t) = \sum^{n-1}_{k = 0}r'_{k+1}(t) P_k = \sum^{n-1}_{k = 0} (\lambda_{k+1} r_{k+1}(t) + r_{k}(t))P_k
\end{equation*}
And therefore
\begin{align*}
Q'(t) - A Q(t) &= \sum^{n-1}_{k = 0} (\lambda_{k+1} r_{k+1}(t) + r_k(t))P_k - A \sum^{n-1}_{k = 0} r_{k+1}(t)P_k\\
&= \sum^{n-1}_{k = 0} (-r_{k+1}(t)(A - \lambda_{k+1}I)P_k + r_k(t)P_k)\\
&= \sum^{n-1}_{k = 0} (-r_{k+1}(t)P_{k+1} + r_k(t)P_k)\\
&= -r_n(t)P_n\\
&= 0
\end{align*}
Since $(-1)^n P_n = p_A(A) = 0$ from the Cayley-Hamilton theorem.
AI: Put everything into one sum, rearrange the summands (we omit $\sum$s and $P_k$s below):
$$ (\color{Blue}{\lambda_{k+1}} \color{Teal}{r_{k+1}} + \color{Purple}{r_k}) - \color{Blue}{A}\color{Teal}{r_{k+1}} = -\color{Teal}{r_{k+1}}\color{Blue}{(A-\lambda_{k+1}I)} + \color{Purple}{r_k}.$$ |
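As a numerical sanity check (my own sketch, not from the original notes), one can integrate the system $r_1'=\lambda_1 r_1$, $r_1(0)=1$, $r_k'=\lambda_k r_k + r_{k-1}$, $r_k(0)=0$ directly and verify that $Q(t)$ matches the matrix exponential:

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.linalg import expm

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])      # real eigenvalues 2 and 3
    lam = np.linalg.eigvals(A).real
    n, t1 = A.shape[0], 0.7

    def rhs(t, r):
        # r_1' = lam_1 r_1 and r_k' = lam_k r_k + r_{k-1}
        dr = lam * r
        dr[1:] += r[:-1]
        return dr

    r0 = np.zeros(n); r0[0] = 1.0   # r_1(0) = 1, r_k(0) = 0 for k > 1
    r = solve_ivp(rhs, (0.0, t1), r0, rtol=1e-10, atol=1e-12).y[:, -1]

    Q, P = np.zeros((n, n)), np.eye(n)   # P_0 = I, P_k = prod_j (A - lam_j I)
    for k in range(n):
        Q += r[k] * P
        P = P @ (A - lam[k] * np.eye(n))

    print(np.allclose(Q, expm(A * t1)))  # True: Q(t) = e^{At}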
H: Retract and Homotopy extension property
See the picture below.
According to Hatcher, homotopy extension property implies that for a pair $(X,A)$ where $A$ is a subspace of $X$,
$X\times I$ should retract to $X\times\{0\}\cup A\times I$.
My question is whether the retract given in the picture is possible? If yes, how would we do it? It seems to me that it is impossible to do the retraction continuously.
AI: It's definitely doable. Let's consider a simpler example first: let $X=[0,1]$, and let $A=\{0\}$.
You can retract $X\times I$ (a square) to $(X\times\{0\})\cup(A\times I)$ (the union of the "bottom" and "left" sides of the square) by projecting each point along the ray from $(2,2)$:
To move this intuition to your example of $X=$ a disk and $A=$ a smaller disk inside $X$, just "swing this around" (as one would to form a solid of revolution) and leave the interior of $A$ alone.
For fun:
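(* Mathematica code from the original answer: it animates the deformation
   retraction of X x I onto (X x {0}) union (A x I), where X is a disk of
   radius RadiusOfX and A a concentric disk of radius RadiusOfA. *)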
PlotACylinder[RadiusOfA_, Height_, theta_, u_] :=
{RadiusOfA*Cos[theta], RadiusOfA*Sin[theta], Height*u}
PlotATop[RadiusOfA_, Height_, theta_, u_] :=
{RadiusOfA*u*Cos[theta], RadiusOfA*u*Sin[theta], Height}
PlotX[RadiusOfX_, theta_, u_] :=
{RadiusOfX*u*Cos[theta], RadiusOfX*u*Sin[theta], 0}
PlotTopSurface[RadiusOfA_, RadiusOfX_, Height_, t_, theta_, u_] :=
Module[{x, y},
x = RadiusOfA + (RadiusOfX - RadiusOfA) u;
y = 2 Height*(1 - (2 RadiusOfX - 2 RadiusOfA)/(2 RadiusOfX - RadiusOfA - x))
+ Height*(2 RadiusOfX - 2 RadiusOfA)/(2 RadiusOfX - RadiusOfA - x);
{(x (1 - t) + RadiusOfA*t)*Cos[theta], (x (1 - t) + RadiusOfA*t)*Sin[theta],
Height (1 - t) + y*t}]
PlotSideSurface[RadiusOfA_, RadiusOfX_, Height_, t_, theta_, u_] :=
Module[{x, y},
y = Height*u;
x = (2 RadiusOfX - RadiusOfA)*(1 - (2 Height/(2 Height - y)))
+ RadiusOfX (2 Height/(2 Height - y));
{(RadiusOfX (1 - t) + x*t)*Cos[theta], (RadiusOfX (1 - t) + x*t)*Sin[theta],
y (1 - t)}]
PlotRetract[RadiusOfA_, RadiusOfX_, Height_, t_] := ParametricPlot3D[
{PlotACylinder[RadiusOfA, Height, theta, u],
PlotATop[RadiusOfA, Height, theta, u],
PlotX[RadiusOfX, theta, u],
PlotTopSurface[RadiusOfA, RadiusOfX, Height, t, theta, u],
PlotSideSurface[RadiusOfA, RadiusOfX, Height, t, theta, u]},
{theta, 0, 2 Pi}, {u, 0, 1}, Mesh -> None, Axes -> None,
Boxed -> False, PlotPoints -> 30,
Lighting -> {{"Directional", White, {{1, 1, 1}, {0, 0, 0}}}},
PlotStyle -> {Gray, Gray, Gray, Directive[Blue, Opacity[0.5]],
Directive[Blue, Opacity[0.5]]}]
Export["animation.gif", Table[PlotRetract[1, 3, 4, Max[0, t]],
{t, -0.1, 0.98, 0.02}], "DisplayDurations" -> {0.125}] |
H: How to show $\frac{a_1+\cdots+a_n}{n}\le S_h\sqrt[n]{a_1\cdots a_n}$
For positive numbers $a_i$ with $0<m\le a_i\le M$, how to show
$$\frac{a_1+\cdots+a_n}{n}\le S_h\sqrt[n]{a_1\cdots a_n},$$ where $h=M/m$ and $$S_h=\frac{(h-1)h^{\frac{1}{h-1}}}{e\log h}$$
AI: Let
$$
f(b_1,\dots,b_n)=\frac{b_1^n+\dots+b_n^n}{nb_1\dots b_n},\quad \text{for $b_i\in [\sqrt[n]{m}, \sqrt[n]{M}]$}
$$
$f$ is a convex function in each variable, therefore the maximum is attained on the boundary
$$
\max f=\max_{s+t=n} \frac{sm+tM}{n \sqrt[n]{m^s}\sqrt[n]{M^t}}\le \max_{r\in [0, 1]} rh^{r-1}+(1-r)h^r.
$$
It remains to consider the function $g(r)=rh^{r-1}+(1-r)h^r,\, r\in [0, 1]$.
The following is done with the help of http://www.wolframalpha.com/ (h=3, h=4, h=5, h=6). Since
$$g'(r)=h^{r-1}\big[(-hr+h+r) \log(h)-h+1 \big],$$
setting $g'(r)=0$ gives the critical point
$$
r_0=\frac{h}{h-1}-\frac{1}{\log h}.
$$
Since $g(r)=h^{r-1}\big(h-r(h-1)\big)$ and $h^{1/\log h}=e$, substituting $r_0$ back yields
$$
\max g=g(r_0)=h^{r_0-1}\cdot\frac{h-1}{\log h}=\frac{(h-1)h^{\frac{1}{h-1}}}{e\log h}=S_h,
$$
completing the proof. |
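A quick numerical check of the inequality (my own sketch, not part of the original answer): sample points in $[m,M]$ at random and verify that the ratio AM/GM never exceeds $S_h$:

    import numpy as np

    rng = np.random.default_rng(0)
    m, M, n = 0.5, 4.0, 7
    h = M / m
    S_h = (h - 1) * h ** (1 / (h - 1)) / (np.e * np.log(h))

    worst = 0.0
    for _ in range(100_000):
        a = rng.uniform(m, M, n)
        ratio = a.mean() / np.exp(np.log(a).mean())  # arithmetic / geometric mean
        worst = max(worst, ratio)
    print(worst, S_h)  # the worst observed ratio stays below S_h ≈ 1.6667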
H: What is the difference between a proposition and an observation?
As far as I know, a proposition is a statement which might be used to prove a theorem but is also of independent interest.
How would you differentiate it from an observation? Would you say it is a proposition with an easy proof?
AI: I agree with your characterization of "proposition". I think that by analogy with the physical world an observation in mathematical writing is any statement whose truth is apparent "by inspection". Usually this corresponds to having a short, straightforward proof by an obvious method.
Often the term "observation" it is used more loosely to mean a statement with a proof that the author desires to omit for some other reason. However, this usage has the disadvantage that it may frustrate the reader who does not immediately see why the statement is true.
I think that whether something can be called an observation or not is mostly separate from its role in the paper, unlike the theorem/proposition/lemma distinction. (Of course, the author should not ask the reader to simply "observe" the truth of the main theorem!)
H: The limit $\lim\limits_{n\to\infty} (\sqrt{n^2-n}-n)$. Algebraic and intuitive thoughts.
I am working on the following problem.
Find the limit of $$\lim_{n \to \infty} (\sqrt{n^2-n}-n)$$
Intuitively, I want to say it's $0$ because as $n \to \infty$, $\sqrt{n^2-n}$ behaves like $n$ and subtracting $n$ makes it $0$.
However, algebraically, to my surprise
$$\begin{align}
\lim_{n \to \infty} (\sqrt{n^2-n}-n) & = \lim_{n \to \infty} \frac{n}{\sqrt{n^2-n}+n}\\
& = \lim_{n \to \infty} \frac{1}{\sqrt{1-\frac{1}{n}}+1}\\
& = \frac{1}{2} \\
\end{align}$$
Is there any intuitive way to explain why 0 was not the answer ?
AI: You have a sign error: you should have $-\frac12$.
Complete the square: $n^2-n=\left(n-\frac12\right)^2-\frac14$, so for large $n$ its square root is very close to $n-\frac12$, and subtracting $n$ brings you very close to $-\frac12$. |
H: Finding an unbiased estimator for the negative binomial distribution
Consider a negative binomial random variable $Y$ as the number of failures that occur before the $r$th success in a sequence of independent and identical success/failure trials. The pmf of $Y$
is $$nb(y;r,\theta)=\begin{cases}
{y+r-1 \choose y}\theta^{r}(1-\theta)^{y} & y=0,1,2,\dots\\
0 & \text{otherwise }
\end{cases}$$
Suppose that $r\geq2$.

(a) Show that $\tilde{\theta}=\frac{r-1}{Y+r-1}$ is an unbiased estimator for $\theta$; i.e. show that $E(\tilde{\theta})=\theta$.
How would I proceed? How do I even calculate $E(\tilde{\theta})$? This might be a stupid question since the teacher hasn't covered this yet (I'm learning ahead), but how would this thing work?
AI: Note that
$$E\left(\frac{r-1}{Y+r-1}\right)=\sum_{y=0}^\infty \frac{r-1}{y+r-1}\binom{y+r-1}{y} \theta^r(1-\theta)^y.$$
First we do a bit of binomial coefficient manipulation. In general
$$\binom{n}{k}=\frac{n}{k}\binom{n-1}{k-1}.$$
This can be proved easily by expressing binomial coefficients in terms of factorials. (There are also combinatorial proofs.)
Since $r\ge 2$, our expectation is
$$\theta\sum_{y=0}^\infty \binom{y+r-2}{y} \theta^{r-1}(1-\theta)^y.\tag{1}$$
The expression (1) is $\theta$ times a certain sum. We will be finished if we show that sum is equal to $1$. Note that
$$\binom{y+r-2}{y} \theta^{r-1}(1-\theta)^y$$
is the probability that the $r-1$-th success occurs on the $y+r-1$-th trial.
For we must choose $y$ places among the first $y+r-2$ to have failure. The probability we have failures in the chosen places and success in the remaining $r-2$ places is $\binom{y+r-2}{y}\theta^{r-2}(1-\theta)^y$. Finally, multiply by $\theta$ to take account of the fact we have success in the $y+r-1$th place.
Summing from $y=0$ to $\infty$ therefore must yield $1$. |
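To make the computation tangible, here is a quick Monte Carlo check of unbiasedness (my own sketch, not part of the original answer); note that NumPy's negative_binomial returns exactly the number of failures before the $r$th success:

    import numpy as np

    rng = np.random.default_rng(1)
    r, theta = 5, 0.3
    Y = rng.negative_binomial(r, theta, size=10**6)  # failures before r-th success
    print(((r - 1) / (Y + r - 1)).mean())            # ≈ 0.3 = theta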
H: Is polynomial $1+x+x^2+\cdots+x^{p-1}$ irreducible?
Let $p$ a prime number, is the polynomial
$$1+x+x^2+\cdots+x^{p-1}$$
irreducible in $\mathbb{Z}[x]$ ? Thanks in advance.
AI: Mr. Eisenstein certainly thinks so!
The polynomial can be rewritten as $\frac{x^p-1}{x-1}$. Setting $x=(y+1)$ has no effect on (ir)reducibility of the polynomial; doing so and applying the binomial theorem yields:
$$\frac{(y+1)^p-1}{(y+1)-1}=\frac{1}{y}\sum_{k=1}^{p}\binom{p}{k}y^{k}=y^{p-1}+\binom{p}{p-1}y^{p-2}+\ldots+\binom{p}{2}y^1+\binom{p}{1}y^0$$
Since all the coefficients apart from the leading one are divisible by $p$ and the constant term is too small to be divisible by $p^2$, we can apply the criterion and conclude that the polynomial is irreducible over rationals (and thus also over integers). |
H: Is the square root of $4$ only $+2$?
Why is $4^{1/2}=+2$?
It should also be $-2$ since both squared just give two only. Also why do we always represent root of $x$ on the right side of the number line?
AI: It is by convention: with real numbers, we agree to take the positive square root. This allows us to define $$f(x) = \sqrt x, \;\;x \in \mathbb R, \;\;x\geq 0, $$ so it is a true real-valued function: taking a square-root of a number greater than or equal to zero "returns" a unique real number (is hence a function). Without that convention $\sqrt x$ would fail to be be a function.
(Note: as imranfat suggests, I think you might be confusing the square root function with what we know about solving an equation $x^2 = 4$, which has two solutions, $x = \pm 2$.)
H: Finding the upper and lower limit of the following sequence.
$\{s_n\}$ is defined by $$s_1 = 0; s_{2m}=\frac{s_{2m-1}}{2}; s_{2m+1}= {1\over 2} + s_{2m}$$
The following is what I tried to do.
The sequence is $$\{0,0,\frac{1}{2},\frac{1}{4},\frac{3}{4},\frac{3}{8},\frac{7}{8},\frac{7}{16},\cdots \}$$
So the even terms $\{E_i\} = 1 - 2^{-i}$ and the odd terms $\{O_k\} = \frac{1}{2} - 2^{-k}$ and each of them has a limit of $1$ and $\frac{1}{2}$, respectively.
So, the upper limit is $1$ and the lower limit is $1\over 2$, am I right ?
Does this also mean that $\{s_n\}$ has no limits ?
Is my denotation $$\lim_{n \to \infty} \sup(s_n)=1 ,\lim_{n \to \infty} \inf(s_n)={1 \over 2} $$ correct ?
AI: Shouldn't it be $E_i = \frac{1}{2} - 2^{-i}$ and $O_i = 1 - 2^{1-i}$? That way $E_i = 0, \frac{1}{4}, \frac{3}{8},\ldots$ and $O_i = 0, \frac{1}{2}, \frac{3}{4},\ldots$, which seems to be what you want.
Your conclusion looks fine, but you might want to derive the even and odd terms more rigorously. For example, the even terms $E_i$ are defined recursively by $E_{i+1} = s_{2i+2} = \frac{s_{2i+1}}{2} = \frac{E_i + \frac{1}{2}}{2}$, and $\frac{1}{2} - 2^{-i}$ also satisfies this recursion relation. $E_1 = 0$, and $\frac{1}{2} - 2^{-1} = 0$, hence they have the same first term. By induction the two sequences are the same.
If we partition a sequence into a finite number of subsequences then the upper and lower limit of the sequence are equal to the maximum upper limit and minimum lower limit of the subsequences; in this case you're partitioning into even and odd terms.
$\{s_n\}$ has a limit iff the upper and lower limits are the same (this is proved in most analysis books), so in this case $\{s_n\}$ has no limits. |
H: The security guard problem
There was a security guard in a bank. In front of him were 100 lockers in rows of 10. As he saw that all the lockers were closed, he thought of something. He started by opening all the lockers whose numbers were multiples of 1 (i.e., all of them), then toggled all lockers whose number was a multiple of 2, and did the same for 3 and so on: if a locker was open, he closed it; if closed, he opened it. So he first went through multiples of 1 (all), then 2, 4, 6, 8, ..., then 3, 6, 9, ..., and so on until he finally reached 100.
After all the opening and closing,in the end, how many lockers are open and how many closed?
AI: Hint: How many divisors does $7$ have? How many divisors does $8$ have? How many divisors does $9$ have? What's special about $9$ and why?
Further Hint: If $a$ divides $n$, then $n = ab$ by definition. For example, $2$ divides $8$ because $8 = 2 \cdot 4$. In this way, divisors arise naturally in pairs.
If the numbers appearing in these pairs are all distinct, then there will be an even number of divisors. For example, the divisors of $8$ (in pairs) are $1$ and $8$ and also $2$ and $4$.
What about when the numbers in the pair aren't distinct? This can only happen if $n = a \cdot a$ for some $a$ (that is, if $n$ is a perfect square). For example, the divisors of $9$ (in pairs) are $1$ and $9$ and also $3$ and $3$. We don't count the $3$ twice, of course, so we get an odd number of divisors. |
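If you want to see the answer before proving it, a direct simulation (my own sketch, not part of the original answer) shows that exactly the perfect-square lockers end up open:

    is_open = [False] * 101                  # lockers 1..100
    for k in range(1, 101):
        for locker in range(k, 101, k):      # toggle every multiple of k
            is_open[locker] = not is_open[locker]
    print([i for i in range(1, 101) if is_open[i]])  # [1, 4, 9, 16, ..., 100]
    print(sum(is_open))                              # 10 open, hence 90 closed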
H: What am I doing wrong with solving $2\tanh^2x-\text{sech}~x=1$?
$2\tanh^2(x)-\text{sech}(x)=1$
$\tanh^2(x)=1-\text{sech}^2(x)$
$2(1-\text{sech}^2(x))-\text{sech}(x)=1$
$2\text{sech}^2(x)+\text{sech}(x)-1=0$
$\text{sech}(x)=\frac{1}{2} $ Not possible. And $\text{sech}(x)=-1$ Also not possible
What am I doing wrong?
AI: The equation $\text{sech} x=\frac{1}{2}$ has a solution.
Rewrite as $\cosh x=2$, and then as $e^x+e^{-x}=4$, and then as $e^{2x}-4e^{x}+1=0$. We get a reasonably pleasant quadratic in $e^x$: its roots are $e^x=2\pm\sqrt{3}$, so $x=\ln(2\pm\sqrt{3})$.
H: Sets and bijection
I am asked to do determine if the following set is countable and if so, perform a bijection with the Natural set of numbers.
The set is: all bit strings not containing the bit $0$
I have determined that this is a countable set. Performing the bijection is the part I am unsure of.
My first thought is to write $\{1, 11, 111, 1111, 11111, \ldots\}$ but is this actually a valid answer? Any suggestions on what technique I would use to show a valid answer?
AI: You're on your way:
Note that given the set you are constructing, we have:
$$A = \{\underbrace{1}_{\large n = 1}, \underbrace{11}_{\large n = 2}, \underbrace{111}_{\large n = 3}, \underbrace{1111}_{\large n = 4}, \underbrace{11111}_{\large n = 5}, \underbrace{\cdots}_{\large \ldots} \}$$
Now, can you find a way to construct, or define, a bijective function $f: A \to \mathbb N$ (or equivalently, a bijective function $g: \mathbb N \to A$)? |
H: Showing reflection in a plane containing origin is represented by symmetric orthogonal matrix
I'm sure this question is pretty straightforward but I've been scratching my head at this for a small while
Assuming that any isometry $T: \mathbb{R}^3 \rightarrow \mathbb{R}^3$ may be written in the form $T({\bf{v}}) = P{\bf{v}} + {\bf{q}}$ where ${\bf{q}} \in \mathbb{R}^3$ and $P$ is a $3 \times 3$ orthogonal matrix show that a reflection in a plane containing the origin is represented by a symmetric orthogonal matrix.
I've been trying to use the fact that if we've got some ${\bf{v}} \in \mathbb{R}^3$ then we can write reflection as $T({\bf{v}}) = P{\bf{v}}$ (for some reason I feel like the constant vector isn't needed for reflection) - then since we're referring to reflection we must have
$TT({\bf{v}}) = T^2({\bf{v}}) = P^2{\bf{v}}= I{\bf{v}}$ (essentially saying reflection twice will give us the original vector)
We know that $P$ is orthogonal, so $P^TP = I$, then using $P^2 = I$ we get $P^TP = P^2 \implies P^T = P$ hence it is a symmetric orthogonal matrix.
Is this correct? Thanks for any help.
AI: Looks ok to me. Vector $q$ equals $0$ because the plane of reflection contains the origin, so $T(0)=0$; then $T(0)=P(0)+q$ gives $0=0+q$.
H: Is this the best solution (or a correct one) to this recurrence relation.
The $r_k$'s are terms of a sequence of positive integers satisfying $\sup r_k=\infty$. I am looking for a solution $F_k$ to
$$F_{k}\ge 2r_{k+1} F_{k+1}-F_{k+1}\ (k\ge1);\ F_1=1 \ (*)$$
(I want $F_{k}$ to be the smallest possible solution to $(*)$). Equality if possible.
Take $F_{k}=q^{k}, q\not=0.$ Then we have $$q^{k}-2r_{k+1}\cdot q^{k+1}+q^{k+1}\ge 0,$$ $$q^{k}(1-2r_{k+1}\cdot q+q)\ge 0$$ Therefore, since $q\not =0$,
$$1-2r_{k+1} q+q\ge0 $$
Solving for $q$ gives $$q\le \frac{1}{2r_{k+1}-1}$$
Therefore, $F_{k}\le (\frac{1}{2r_{k+1}-1})^{k}.$
Since $(*)$ isn't linear and we're dealing with an inequality, how do I know if I have a general enough solution of $(*)$?
AI: Your inequality rearranges to $$\frac{F_k}{2r_{k+1}-1}\ge F_{k+1}, k\ge 1$$
Hence $$F_2\le F_1 \left(\frac{1}{2r_2-1}\right)$$
and $$F_3\le F_2\left(\frac{1}{2r_3-1}\right)\le F_1 \left(\frac{1}{2r_3-1}\right)\left(\frac{1}{2r_2-1}\right)$$
Hence you can prove by induction that $$F_n\le F_1\prod_{i=2}^n\left(\frac{1}{2r_i-1}\right)$$
(and equality can be achieved) |
H: Trying to understand implication
I'm currently slogging through propositional calculus and making my brain do impressions of a pretzel, but I'm slowly getting it though I'd like to see if that's actually true for the problem below. Is my answer correct for the below proposition?:
(p $\rightarrow$ p) $\lor$ (q $\rightarrow$ q) $\rightarrow$ p $\lor$ q
My truth table for p and q starts with 0 and 0 at the top, and at the bottom is 1 and 1. My final result came to:
$$\begin{array}{cc|c}
p & q & (p \rightarrow p) \lor (q \rightarrow q) \rightarrow p \lor q\\ \hline
0 & 0 & 0\\
0 & 1 & 1\\
1 & 0 & 1\\
1 & 1 & 1
\end{array}$$
Sorry for the awful formatting, but I'm having trouble trying to whip up a truth table on this. Hopefully you get the idea?
AI: Your table for $$[(p \implies p) \lor (q \implies q)]\implies (p \lor q)$$
is equivalent to proposition: $\;p\lor q$, which has precisely the same truth-table assignment as does your statement.
$\qquad\qquad\qquad\qquad\qquad$
Both of the Or'ed statements $(p \implies p),\;(q \implies q)$ are tautologies (true regardless of the truth values of p, and of q), and so also is, the disjunction $$(p \implies p) \lor (q \implies q)$$ It is true no matter what the truth values of $p$ and $q$. So the only way your statement can be false is if the consequence of the main implication is false: the consequence of the main implication is the disjunction $(p\lor q).\;$ And $\;p \lor q\;$ is false if and only if $p$ is false and $q$ is false. |
H: On the definition of graded Betti numbers
Let's use as reference the slides 19-31.
Let $S=k[x_1,\dots,x_n]$ and $M$ a finitely generated graded $S$-module. Then by Hilbert's Syzygy Theorem, $M$ has a minimal, graded, free resolution of length at most $n$, i.e.,
$$0 \rightarrow F_m \rightarrow F_{m-1} \rightarrow \cdots \rightarrow F_1 \rightarrow F_0 \rightarrow M \rightarrow 0,$$ where $F_i$ is free, graded $S$-module and $m \le n$.
Towards introducing the graded Betti numbers, let $F_i = \bigoplus_{d} (F_i)_d$, where $(F_i)_d$ is the homogeneous component of degree $d$. Now, $(F_i)_d$ is a finite-dimensional $k$-vector space.
Here is the part that confuses me: at slide 20, the author says "we give to each generator of $F_i$ a degree" and subsequently writes $F_i = \bigoplus_{j \in \mathbb{Z}} S(-j)^{\beta_{i,j}}$, and he mentions that $S(-j)$ is cyclic free module.
Question 1. I can not understand what it means to give a degree to each generator (i don't see why every generator has to be inside some homogenous component, in which case we could associate a degree to each generator).
Question 2. What exactly is $S(-j)$? Initially, i thought that $S(-j)$ was the homogeneous component of $S$ of degree $-j$, however, this is not a cyclic module as the author claims at slide 27.
Any examples will be highly appreciated.
AI: First, Question 2: For any graded object $M=\bigoplus M_d$ and integer $j$, one gets a new graded object $M(-j)$ with grading obtained by shifting that of $M$: the degree $d$ piece of $M(-j)$ is by definition $M(-j)_d=M_{d-j}$.
Now for Question 1, since the module $F_i$ is free it can be written as a direct sum
$$F_i=S^{\oplus \beta_i}$$ of some number of copies of the free module of rank one $S$. But to make the maps in your resolution graded of degree zero (a map $\phi:M \rightarrow N$ is graded of degree $0$ if $\phi(M_d)\subseteq N_d$), you may have to shift the gradings on the summands $S$ that appear. Equivalently, you may have to give the generator $1 \in S$ a non-zero degree, in order to map it onto something with non-zero degree---but this is just a book-keeping device, as I hope will become clear.
I'll give two examples now, which are the first two everyone learning this stuff for the first time should work out:
First, assume $n=1$ and $M=k=k[x]/(x)$ is the trivial module concentrated in degree $0$. There is a free resolution
$$0 \rightarrow S \rightarrow S \rightarrow M \rightarrow 0$$ of length $1$, where the map from $S$ to itself is multiplication by $x$. In order to make this a degree $0$ map of graded modules, one must shift the grading, replacing the left-hand $S$ by $S(-1)$ (since the degree $1$ piece of $S(-1)$ is the constants $k$, which get mapped onto constant multiples of the degree $1$ element $x \in S$, this makes good sense). So this resolution is usually written
$$0 \rightarrow S(-1) \rightarrow S \rightarrow k \rightarrow 0.$$
The second example to check out is again $M=k$, but this time with the number of variables $n=2$. This time there is a resolution of length $2$,
$$0 \rightarrow S(-2) \rightarrow S(-1)^{\oplus 2} \rightarrow S \rightarrow k \rightarrow 0,$$ where the map $S(-1)^{\oplus 2} \rightarrow S$ is given by $(f,g) \mapsto xf+yg$ and the map $S(-2) \rightarrow S(-1)^{\oplus 2}$ is $f \mapsto (yf,-xf)$.
This fits into a family of examples (one for each $n=1,2,3,...$) called the Koszul resolutions. These are in some sense the simplest nontrivial free resolutions. |
H: Does introducing penalties for getting true/false questions incorrect result in higher skill penetration (less luck/variance)?
A student is asked to answer 50 true/false questions and he would get 35 right and 15 incorrect if he had to put his best guesses for each question down.
Now, for each question he has a certain confidence of getting the problem correct, and if we start imposing penalties for getting a question wrong, he can easily adapt by only answering questions with a confidence level commensurate with the ratio [point penalty if wrong]/[point reward if right], and leaving all the other ones blank.
Is there a way for the test designer to design a reward/penalty scheme that maximizes skill penetration and reduces the role of luck and, if so, what factors does the designer have to look out for?
Assume that the test questions have increasing difficulty in a fashion that we cover the full spectrum of confidence on the student.
EDIT: Another idea would be to let the student gamble with points. Also setting a limit on each question and a global gambling limit.
Is there any reasonable scheme to reduce the role of luck in general on true and false questions?
AI: To maximize skill penetration, award 1 point for each correct answer, and -1000 for each incorrect answer. Note: this is not a joke, this is how to only get answers that the student is entirely confident of.
In your example, the student's knowledge is net zero, getting the same score as random guessing. Such a student is unlikely to adapt his or her test-taking strategy to subtle rewards and penalties.
Additional commentary regarding gambling: Suppose you let each student give a weight to each question, of any real number in $[0,1]$, as well as the T/F answer. If right, the student gains that many points, if wrong, loses. Naively one might think that this would encourage students to weigh questions more if they are more confident, and less otherwise. In fact this is not true. If I am 51% confident that the answer is true, I will maximize my score by weighing it 1 rather than anything else.
Suppose instead you allow students to weight as above, but this time if they are wrong they lose DOUBLE their wager. If I am right with probability $p$ my expected score is $pw-2(1-p)w=w(3p-2)$. All this change does is move the threshhold from $p=\frac{1}{2}$ to $p=\frac{2}{3}$. If I believe that $p>\frac{2}{3}$, I should bet the maximum, otherwise I should bet 0.
Result: under all similar gambling schemes, there is never any reason to bet anything other than the maximum (or zero/leave the question blank). |
H: How to prove $\sqrt{x + y} \le \sqrt{x} + \sqrt{y}$?
How do I prove $\sqrt{x + y} \le \sqrt{x} + \sqrt{y}$? for $x, y$ positive?
This should be easy, but I'm not seeing how. A hint would be appreciated.
AI: For positive $x, y$, we have:
$$\sqrt{x + y} \leq \sqrt{x} + \sqrt{y} \iff \left(\sqrt{x + y}\right)^2 \leq \left(\sqrt x + \sqrt y\right)^2 \iff \color{blue}{\bf x + y \leq x + y + 2\sqrt{xy}}$$
What can you conclude about the leftmost "inequality", given its equivalence to the $\color{blue}{\bf rightmost\;inequality}$? |
H: Big-Oh Notation
I'm given to the following relationship:
$$C(x) = C(\lfloor(\frac x2)\rfloor) + x, C(1)=2$$
I do not understand how my teacher says to calculate big O. Any help to start?
AI: HINT: Put $x=2^n$ and solve to find the series. |
H: Give example of a distribution.
Give examples of distribution
(1) such that $X$ and $1-X$ have the same distribution.
(2) such that $X$ and $\dfrac1X$ have the same distribution.
For the first one I think $X$ is $\text{Uniform}(0,1)$. Since $1-X$ is also $\text{Uniform}(0,1)$. I don't know the second distribution. Please help.
AI: For the second problem, we can cheat, and let $X=-1$ with probability $p$, and $X=1$ with probability $1-p$. The random variables $X$ and $\dfrac{1}{X}$ not only have the same distribution, they are the same random variable.
Now that we are cheating, let's go all the way. Let $X=1$ with probability $1$.
For cheating a little less, let $X$ take on the values $\dfrac{1}{2}$ and $2$ each with probability $\frac{1}{2}$. Then $X$ and $\dfrac{1}{X}$ are not the same random variable, but they have the same distribution.
A continuous distribution is more challenging. But for example a random variable $X$ with density function $f_X(x)=\dfrac{1}{x}$ for $e^{-1/2}\le x\le e^{1/2}$ and $f_X(x)=0$ elsewhere has the desired property. |
H: Real sample data source
I'm about to start working on a final project for a college level statistics course. The problem is that I have to analyse real data and so far I haven't got many options. Do you know about any website or perhaps books where I can find data used in previous statistical surveys or something?
AI: I guess this should be more than enough
http://archive.ics.uci.edu/ml/index.html |
H: If $\alpha= (1+\sqrt{-19})/2$ then any ring homomorphism $f : \mathbb{Z}[\alpha] \rightarrow \mathbb{Z}_3$ is the zero map
This is from a past qualifying exam.
Here is the question:
If $\alpha= (1+\sqrt{-19})/2$ then any ring homomorphism $f : \mathbb{Z}[\alpha] \rightarrow \mathbb{Z}_3$ is the zero map.
Here is what I thought of: $\mathbb{Z}_3$ is a field. So I was thinking of using something like $ \mathbb{Z}[\alpha]/(f(\alpha))$ is isomorphic to the field $\mathbb{Z}_3$. If I can somehow show that $f(\alpha)$ is the root of some irreducible polynomial over $\mathbb{Z_3}$ then I will have what I want? I also
have that $\alpha(1-\alpha)=5$. So If I apply $f$ to this relation and then using the fact that $f$ is ring homomorphism I get $f(\alpha)\cdot (f(1)-f(\alpha))=f(5)$. I also know that $f(1)=1$. How do I proceed from here? Am I on the right track?
Can anybody please help me on this? Your time and answers are always appreciated.
Thanks.
AI: Using @Lubin's hint.
Discuss what $f(\alpha)$ would be? If $f(1)=1$,then
$f(\alpha)=0$? then $f(\alpha)(f(1)-f(\alpha))=f(5)$, so $0\cdot(1-0)=5=2$ !?
$f(\alpha)=1$? then $1\cdot(1-1)=5$ !?
$f(\alpha)=2$? then $2\cdot(1-2)=-2=1\neq 2=f(5)$ !?

Since every case leads to a contradiction, there is no homomorphism with $f(1)=1$. As $f(1)$ is idempotent in $\mathbb{Z}_3$, the only remaining possibility is $f(1)=0$, and then $f(x)=f(x)f(1)=0$ for every $x$, i.e. $f$ is the zero map.
H: How can I construct a matrix?
Construct a matrix whose one eigenvector is $(1,-1,1,-1)$.
Here only one eigenvector is given and I need to construct a matrix with this. I don't know how to proceed. Please help.
AI: Short answer -- Every nonzero vector is an eigenvector of the identity matrix.
Long answer -- For a nontrivial example, consider
$$
A = P\pmatrix{1&1\\ &\ddots&\ddots\\ &&\ddots&1\\ &&&1}P^{-1}
$$
where $P$ is any invertible matrix with $(1,-1,1,-1)^T$ as its first column. (Use $A^T$ if you need $(1,-1,1,-1)$ to be a row eigenvector.)
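For a concrete instance of the long answer (my own sketch, not part of the original), take $P$ to be the identity with its first column replaced by the desired eigenvector:

    import numpy as np

    v = np.array([1.0, -1.0, 1.0, -1.0])
    P = np.eye(4)
    P[:, 0] = v                              # invertible: det(P) = 1
    J = np.eye(4) + np.diag(np.ones(3), 1)   # ones on diagonal and superdiagonal
    A = P @ J @ np.linalg.inv(P)
    print(np.allclose(A @ v, v))             # True: v is an eigenvector (eigenvalue 1)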
H: What is the name for the region enclosed by an $n$ dimensional object?
In a $1$ dimensional object, the name for the region enclosed by it is the length of the object. In a $2$ dimensional object, the name for the region is the area of the object. In a $3$ dimensional object, the name is the volume of the object. What is the name in a $4$ dimensional object? Hypervolume? In general, what can we call the name of this region in a $n$ dimensional object? ("the region enclosed by this $n$ dimensional object" seems too long and wordy).
AI: I would say that just volume is actually a fine choice for any dimension $\geq 3$, but if you really want to emphasize the relevant dimension, you could say $n$-volume or $n$-hypervolume. Take a look at the Wikipedia page on Lebesgue measure.
I think good catch-all terms (i.e., for dimensions $1$ and $2$ as well) would be measure or content. |
H: Clarification on 2 'E's for expected value in a conditional probability
The text I am reading defines the Expected Prediction Error as the squared difference between the actual Y value and the predicted Y value (f(X) in the text). Then it conditions on X. The trouble I'm having is understanding the notation of the formula, because it has 2 E symbols with subscripts for X and Y|X. Here is the formula:
$$EPE(f) = E_XE_{Y|X} ([Y - f(X)]^2 | X)$$
Are the E_x and E_y|x to clarify the conditioning? Or do the 2 multiply or some other type of operation? If describing the point of the equation in words, as the expected value with a single E, I suppose it would be 'what is the expected difference between Y and the predicted value of Y, for a given X?', which would be a question to ask since the error could change over different values of X.
For the full context, it is on p. 18 of 'The Elements of Statistical Learning', from http://www-stat.stanford.edu/~tibs/ElemStatLearn/
If more info is needed, I'll be happy to add it.
AI: Actually,
$$E([Y-f(X)]^2)=E_X\{E_{Y|X} ([Y - f(X)]^2 | X)\} \; .$$
This is known as the law of total expectation. |
H: How to show that there is $x_{0} \in X$ such that $f_{n}(x_{0}) \notin \mathbb Q$ for every $n$. $X$ is a Banach space
Let $X$ be a Banach space. $\{f_{n}\}$ is a sequence of nonzero bounded linear
functionals on $X$. Show that there is $x_{0} \in X$ such that $f_n(x_0) \notin \mathbb Q$ for
every $n$.
This is a problem from a functional analysis assignment. I have no idea how to prove the conclusion. I'm grateful for any inspiring replies!
AI: Enumerate the rationals as $q_n, n \in \mathbb{N}$. For every pair $(n,k)$ we consider $F_{n,k} = f_n^{-1}[\{q_k\}]$, which is a closed (by continuity of all $f_n$) and nowhere dense (this uses linearity) subset of $X$. So their union cannot be $X$ by the Baire category theorem (completeness of $X$), any $x_0 \notin \cup_{n,k} F_{n,k}$ will do. |
H: Prove this vector identity using vector identities
Let $f$, $g$ and $h$ be any $C^{2}$ scalar functions. Using the standard identities of vector calculus, prove that;
$$ \nabla \cdot \left( f\nabla g \times \nabla h \right) = \nabla f \cdot \left(\nabla g \times \nabla h \right)$$
Here is my working out so far;
using identity 8
$$ \nabla \cdot \left( f\nabla g \times \nabla h \right) = \nabla h \cdot \left(\nabla \times f\nabla h \right) - f\nabla g \cdot \left( \nabla \cdot \nabla h \right) $$ and the div of a scalar is a vector hence $$ = \overrightarrow H \cdot \left(\nabla \times f\nabla \overrightarrow G \right) - f\nabla \overrightarrow G \cdot \left( \nabla \cdot \overrightarrow H \right) $$ and then using vector identity 10 gives me $$= \overrightarrow H \cdot \left(f\nabla \times \overrightarrow G +\nabla f \times \overrightarrow G\right) - f\nabla \overrightarrow G \cdot \left( \nabla \cdot \overrightarrow H \right) $$ and now I don't know what's next?
Here are the vector identities listed below
AI: HINT: Use identity 7 with $\mathbf{F}=\nabla g\times\nabla h$. |
H: how to show that $Y= \{f\in L^{2}[0,1] \mid f(x)\geq x \text{ a.e.}\}$ is weakly closed in $L^{2}$
Problem: Let $Y= \{f\in L^{2}[0,1] \mid f(x)\geq x \text{ a.e.}\}$. Show that
$Y$ is weakly closed in $L^{2}$.
My thought about solving this problem is to consider a sequence $\{f_{n}\}$ which converges to some $f$ weakly and to show that $f$ is also in the set $Y$. According to the definition of weak convergence, $f_n \stackrel w\longrightarrow f$ if and only if for any $T\in (L^2)^*$, $T(f_{n})\longrightarrow T(f)$, and by the Riesz Representation Theorem, we can express $T$ in terms of some $g$ in $L^2$. But I have no idea about how to proceed. I am new to functional analysis and I will be grateful for any inspiring replies. Thanks!
AI: Let $f_0(x):=x$, and $S=\{f\in L_2[0,1]:f\geq 0\quad\text{ a.e.}\}$.
For each $g\in L_2[0,1]$ consider functional
$$
\phi_g:L_2[0,1]\to\mathbb{R}:f\mapsto\langle f,g\rangle
$$
One can show that
$$
f\geq f_0 \quad\text{ a.e.}\Longleftrightarrow \int_0^1 f(x)g(x)dx\geq \int_0^1 f_0(x)g(x)dx\quad\text{ for all }\quad g\geq 0\quad\text{ a.e.}
$$
Hence
$$
f\in Y\Longleftrightarrow \phi_g(f)\geq \phi_g(f_0)\quad\text{ for all }\quad g\in S
$$
This means that
$$
Y=\bigcap\limits_{g\in S}(\phi_g)^{-1}\left([\phi_g(f_0),+\infty)\right)
$$
Note that $(\phi_g)^{-1}\left([\phi_g(f_0),+\infty)\right)$ is weakly closed as the preimage of the closed set $[\phi_g(f_0),+\infty)$ under the action of the continuous linear functional $\phi_g$. Thus $Y$ is weakly closed as an intersection of weakly closed sets.
H: Question about a solution to a problem involving Taylor's theorem and local minimum
I've been studying "Berkeley Problems in Mathematics, Souza, Silva" and I came across this problem:
Let $f : \mathbb{R} \rightarrow \mathbb{R}$ be a $C^{\infty}$ function. Assume that
$f(x)$ has a local minimum at $x = 0$. Prove there is a disc centered on the
$y$ axis which lies above the graph of $f$ and touches the graph at $(0, f(0))$.
We use Taylor's theorem:
there is a constant $C$ such that $|f(x) - f(0) - f’(0)x| \le Cx^2$ and we assume that $|x| < 1$.
Why is that?
I know that if a function has a local minimum at $0$, it means that in a certain neighbourhood its values cannot be less than $f(0)$.
Will anything bad happen if we instead assume that $|x|<\delta<1$ ?
Please help me. I see it's a crucial step in the solution of this problem.
http://thor.info.uaic.ro/~fliacob/An1/2012-2013/Concursuri/SEEMOUS-2013/Baza%20de%20documentare/Souza,%20Silva%20-%20Berkeley%20Problems%20In%20Mathematics%20(440S).pdf
question: Problem
1.4.26
page 24 , solution: page 177
AI: Suppose that $|f''(x)|\le M$ for $|x|\le\epsilon$. Since $f'(0)=0$, we have that for $|x|\lt\epsilon$,
$$
\left|\frac{f'(x)-f'(0)}{x-0}\right|=|f''(\xi)|\le M
$$
for some $\xi$ between $0$ and $x$; that is, for $|\xi|\le\epsilon$.
Thus, $|f'(x)|\le M|x|$ and integrating yields $|f(x)-f(0)|\le\frac M2x^2$ for $|x|\le\epsilon$.
The bottom of the circle of radius $r$ centered at $f(0)+r$ is
$$
g(x)=f(0)+r-\sqrt{r^2-x^2}
$$
so that $g(0)=f(0)$ and
$$
\begin{align}
g(x)-f(0)
&=\frac{x^2}{r+\sqrt{r^2-x^2}}\\
&\ge\frac1{2r}x^2
\end{align}
$$
Thus, if we set $r\le\min\left(\frac1M,\epsilon\right)$, the circle of radius $r$ centered at $f(0)+r$ satisfies the requirements. That is,
$$
g(x)\ge f(0)+\frac1{2r}x^2\ge f(0)+\frac M2x^2\ge f(x)\\
g(0)=f(0)
$$
and the circle only extends to $|x|\le r\le\epsilon$. |
H: If $X$ is a random variable, then $X^{\pm}$ are random variables
For $x\in\mathbb{R}$, let $x^+:=\max\{0,x\}$ be its positive part, $x^-:=-\min\{0,x\}$ be its negative part. Prove that if $X$ is a random variable, then $X^{\pm}$ are random variables too.
I have attempted to prove this by using definition of a random variable, hence I tried to show $\forall t\in\mathbb{R}$, $\{X^+\le t\}$ and $\{X^-\le t\}$ are events. But somehow when I tried to evaluate them I got confused myself, especially when they are divided into cases for $x<0$ and $x\ge 0$ but $X^+$ and $X^-$ themselves are already its positive and negative part.
Here is how I wrote it:
$$\{X^+\le t\}= \begin {cases} ?&\text{if }x < 0;\\
?&\text{if }x \ge 0. \end {cases}$$
$$\{X^-\le t\}= \begin {cases} ?&\text{if }x < 0;\\
?&\text{if }x \ge 0. \end {cases}$$
Would anyone kindly help me to make this confusion clear? How can I complete the evaluation of $\{X^+\le t\}$ and $\{X^-\le t\}$?
Thanks.
Edit:
Thank you @user64480 for pointing out the mistake and providing the solution.
Here is the correct one:
$$\{X^+\le t\}= \begin {cases} \emptyset,&\text{if }t < 0;\\
\{X\le t\},&\text{if }t \ge 0. \end {cases}$$
$$\{X^-\le t\}= \begin {cases} \emptyset,&\text{if }t < 0;\\
\{X\ge -t\}&\text{if }t \ge 0. \end {cases}$$
AI: Your cases should depend on whether $t$ is positive or negative. Remember, in the expression $\{X^+\le t\}$, we think of $t$ as fixed.
If $t< 0$, then $\{X^+\le t\}$ is empty since $X^+$ is nonnegative. The empty set is an event.
If $t\ge 0$, then $\{X^+\le t\}=\{X\le t\}$. This is because if $X^+\le t$, then $\max(0,X)\le t$, so in particular $X\le t$. Conversely, if $X\le t$, then, since we also assumed $0\le t$, it follows that $X^+=\max(0,X)\le t$. Thus the two sets are indeed equal. But $\{X\le t\}$ is an event since $X$ is a random variable.
Try to do a similar analysis for $X^-$. |
H: Integral of $\int \frac{1+\sin(2x)}{\operatorname{tg}(2x)}dx$
I'm trying to find the $F(x)$ of this function but I don't find how to do it, I need some hints about the solution.
I know that $\sin(2x) = 2\sin(x)\cos(x)$; will that help me? Is it a good idea to substitute $t = 2x$?
$$\int \frac{1+\sin(2x)}{\operatorname{tg}(2x)}dx$$
EDIT
Is it right to do it like that?
$$\int \frac{1+2\sin(x)\cos(x)dx}{\frac{2\sin(x)\cos(x)}{\cos^2(x)-\sin^2(x)}}$$
$$\int \left(\frac{\cos^2(x)-\sin^2(x)}{2\sin(x)\cos(x)}+\cos^2(x)-\sin^2(x)\right)dx = \int \left(\operatorname{ctg}(2x)+\frac{1+\cos(2x)}{2}-\frac{1-\cos(2x)}{2}\right)dx$$ and the integral of $\displaystyle \operatorname{ctg}(2x)$ is $\displaystyle \frac{\ln|\sin(2x)|}{2}+C$, while the other two terms together integrate to $\displaystyle \frac{\sin(2x)}{2}+C$.
Thanks
AI: Effective hint:
Let $\int R(\sin x,\cos x)dx$ wherein $R$ is a rational function respect to $\sin x$ and $\cos x$. If $$R(-\sin x, -\cos x)\equiv R(\sin x, \cos x) $$ then $t=\tan x, t=\cot x$ is a good substitution. |
H: Finding the number of symmetric, positive definite $10 \times 10$ matrices having...
I was looking at old exam papers and I was stuck with the following problem:
What is the number of symmetric, positive definite $10 \times 10$ matrices having trace equal to $10$ and determinant equal to $1$ ? The options are:
$0$
$1$
greater than $1$ but finite
Infinite.
After getting positive feedback,I am posting my attempt as a separate answer.
AI: Everything you did is perfect; you can just add that the only matrix similar to the identity matrix is the identity itself.
H: Fourier Transform from Discrete Fourier Transform
If I have the basic Discrete Fourier Transform from a discrete function $x[n]$, like this:
$$\displaystyle X[k] = \sum\limits_{n=0}^{N-1} x[n]e^{-j\frac{2\pi}{N}kn} $$
How can I get to the expression for the Fourier Transform:
$$\displaystyle X(j\Omega) = \sum\limits_{n=-\infty}^{+\infty} x[n]e^{-j\Omega n} $$
I understand that it follows when $N\rightarrow \infty$, but when you simply substitute that, you get limits from $0$ to $\infty$ instead of $-\infty$ to $\infty$, which is needed for the Fourier Transform.
How can I get around this?
P.S. The function $x$ can be of length $L$, with $N\geq L$ and $x$ padded with zeros for $L\leq n \leq N-1$, but that isn't so important.
AI: For the Discrete Fourier Transform (DFT) the signal $x[n]$ needs to be of finite length. This is not a very serious restriction because $N$ can of course be chosen arbitrarily large. If the indices are then chosen such that $x[n]$ is zero for $n<0$ and $n\ge N$ then the Fourier Transform of $x[n]$ can be evaluated at discrete frequencies $\Omega_k=\frac{2\pi k}{N}$ using the DFT. A denser frequency grid can be easily computed by zero-padding $x[n]$, i.e. by appending zeros to $x[n]$ and computing the DFT for a larger value of $N$.
EDIT: If we extend $x[n]$ periodically, we have
$$ X[k] = \sum\limits_{n=0}^{N-1} x[n]e^{-j\frac{2\pi}{N}kn}=
\sum\limits_{n=-N/2}^{N/2-1} x[n]e^{-j\frac{2\pi}{N}kn} $$
for even $N$. Now you can take the limit $N\rightarrow\infty$ if you make sure that the limit exists, i.e. if $\sum_{n=-\infty}^{\infty}|x[n]|$ is finite. |
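A small numerical illustration (my own sketch, not part of the original answer): the zero-padded $N$-point DFT samples the DTFT at $\Omega_k = 2\pi k/N$, so enlarging $N$ only refines the frequency grid:

    import numpy as np

    x = np.array([1.0, 2.0, 0.5, -1.0])  # finite-length signal, L = 4
    N = 16
    X = np.fft.fft(x, n=N)               # zero-padded N-point DFT

    k = 5
    Omega = 2 * np.pi * k / N
    dtft = sum(x[m] * np.exp(-1j * Omega * m) for m in range(len(x)))
    print(np.allclose(X[k], dtft))       # True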
H: Solve the following linear first-order equation
Question:
Solve the following linear first-order equation.
$(1+e^x)y '+e^xy=0$
I resolved:
$a_0(x)y'+a_1(x)y=g(x) \Rightarrow y'+P(x)y=Q(x)$
$y'+\frac{{e}^{x}}{1+{e}^{x}}y=0 ,\quad Q(x)=0,\ P(x)=\frac{{e}^{x}}{1+{e}^{x}}$
Building the integrating factor:
$\mu (x)=\exp\int P(x)\,dx=\exp\int\frac{{e}^{x}}{1+{e}^{x}} dx=\exp\int\frac{e^x}{u}\cdot\frac{du}{e^x}$
$1+e^x=u \rightarrow e^xdx=du \rightarrow dx=\frac{du}{e^x} \rightarrow \exp\int \frac{du}{u}=\exp(\ln\left|u\right| )=u=1+e^x$
$(1+e^x)y'+e^xy=0 \Rightarrow d((1+e^x)y)=0 \rightarrow \int d((1+e^x)y)=0 \rightarrow (1+e^x)y=0 \rightarrow y=0$
Is my solved correctly?
Thanks for any help :)
AI: notice:$$(1+e^x)\acute{y}+e^xy=0 \Rightarrow d((1+e^x)y)=0 \rightarrow \int d((1+e^x)y)=c \rightarrow 1+e^xy=c\rightarrow y=\frac{c}{1+e^x}$$ |
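For what it's worth, a CAS check of the answer (my own sketch, not part of the original):

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')
    ode = sp.Eq((1 + sp.exp(x)) * y(x).diff(x) + sp.exp(x) * y(x), 0)
    print(sp.dsolve(ode, y(x)))  # y(x) = C1/(exp(x) + 1)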
H: A challenging logarithmic integral $\int_0^1 \frac{\log(1+x)\log(1-x)}{1+x}dx$
While playing around with Mathematica, I found that
$$\int_0^1 \frac{\log(1+x)\log(1-x)}{1+x}dx = \frac{1}{3}\log^3(2)-\frac{\pi^2}{12}\log(2)+\frac{\zeta(3)}{8}$$
Please help me prove this result.
AI: Use your favorite program to compute the indefinite integral in terms of polylogarithms
$$\int\frac{\ln(1+x)\ln(1-x)\,dx}{1+x}=\frac{\ln2}{2}\ln^2(1+x)-\ln(1+x)\,\mathrm{Li}_2\left(\frac{1+x}{2}\right)+\mathrm{Li}_3\left(\frac{1+x}{2}\right).$$
[This can be verified by straightforward differentiation].
To compute the definite integral, it suffices to know $\mathrm{Li}_{2,3}\left(\frac12\right)$ and $\mathrm{Li}_{2,3}(1)$. However, the definition of polylogarithm immediately implies $\mathrm{Li}_s(1)=\zeta(s)$. Also, the values $\mathrm{Li}_{2,3}\left(\frac12\right)$ can be found here (formulas (16), (17)). |
H: On Polar Sets with respect to Continuous Seminorms
In the following, $X$ is a Hausdorff locally convex topological vector space and $X'$ is the topological dual of $X$. If $p$ is a continuous seminorm on $X$ then we shall designate by $U_p$ the "$p$-unit ball", i.e,
$$U_p=\{x\in X: p(x)\le 1\}.$$
The polar set of $U_p$ is given by
$$U_p^o=\{f\in X':|f(x)|\le 1\quad \forall x\in U_p\}.$$
How do we prove that for each $x\in X$, we have
$$p(x)=\sup\{|f(x)|:f\in U_p^o\}.$$
I need some help...Thanks in advance.
AI: Assume that $p(x)=0$. Then for all $\lambda>0$, $\lambda x\in U_p$, hence $|f(\lambda x)|\leqslant 1$ and $f(x)=0$ whenever $f\in U_p^o$.
If $p(x)\neq 0$, then considering $\frac 1{p(x)}x$, we get the $\geqslant$ direction. For the other one, take $f(a\cdot x):=a\cdot p(x)$ for $a\in\Bbb R$; then $|f(v)|\leqslant p(v)$ for any $v\in\Bbb R\cdot x$. We extend $f$ by the Hahn-Banach theorem to the whole space: then $|f(w)|\leqslant p(w)$ for any $w\in X$, so $f\in U_p^o$ and $f(x)=p(x)$, giving what we want.
H: A basic probability doubt on derangment
Is there any implication that the probability that a random permutation is a derangment is $\frac{1}{e}$ when $n->\infty$ ?
AI: I assume that by derangement, you mean a permutation not fixing any element of the underlying set.
My probability professor in college offered the following interpretation: $(1-e^{-1})$ is the likelihood of a 'coincidence'.
Here's a practical example (I just realized this very example occurs on the wikipedia article). Your probability class has $n$ students where $n$ is very, very large, and so the professor decides to have students grade each other's homework assignments. A permutation $\sigma \in S_n$ is chosen uniformly; this permutation dictates that the assignment belonging to student $i$ is graded by student $\sigma(i)$. Thus a derangement $\sigma$ is a way of assigning students to papers so that nobody grades their own paper, and a 'coincidence' here is the event 'at least one person grades their own paper'.
For $n$ very large, the likelihood of this coincidence is $(1 - e^{-1})$. |
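For a concrete feel (my own sketch, not part of the original answer), already for $n=8$ the exact fraction of derangements is close to $1/e$:

    import math
    from itertools import permutations

    n = 8
    derangements = sum(all(p[i] != i for i in range(n))
                       for p in permutations(range(n)))
    print(derangements / math.factorial(n))  # 0.36788...
    print(math.exp(-1))                      # 0.36787944...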
H: A domain with only a (non-zero) prime ideal
What is an example of a domain $A$ such that Spec$A=\{(0),\mathfrak p\}$? For instance one could find a principal ideal domain that is also a local ring but I can't imagine such a ring.
AI: Let $p$ be a prime number and let $\mathbb{Z}_{(p)} = \{ x \in \mathbb{Q} : p \text{ does not divide the denominator of } x \}$. This is a local ring and a principal ideal domain, with maximal ideal $p \mathbb{Z}_{(p)}$. |
H: Set partition of strings by suffix
I have this question:
Is this collection of subsets a partition on the set of bit strings of length 8:
The set of bit strings that end with 111, the set of bit strings that
end with 011, and the set of bit strings that end with 00.
My answer would be no, this is not a partition, as it does not accommodate strings that end in 10 or 01 for example. However, it appears that I am incorrect based on the given answer. Is anyone able to explain to me why I am wrong?
AI: In this case, you are not wrong, for exactly the reason you stated; the purported partition does not contain e.g. 00000001.
Cheers. |
H: Prove: If $g(A)$ is not scalar ($g(A) \neq \lambda I$) $\rightarrow$ $g(A)$ has no real eigenvalues for a given matrix and minimum polynomial
Given $A \in M_{n x n} (\mathbb R)$ such that $m_A(x) = x^2 + 1$ (the minimum polynomial), and let $g \in \mathbb R[x]$. Prove: If $g(A)$ is not scalar ($g(A) \neq \lambda I$) $\rightarrow$ $g(A)$ has no real eigenvalues
I can't really find the connection. So if $m_A(X)$ is like that, it is obvious that it has no -real- eigenvalues, since the minimum polynomial includes all the roots of the characteristic polynomial. Or am I getting something wrong?
AI: Hint: Let $g(x) = (x^2+1)p(x) + qx + r$ for some real polynomial $p$ and some real numbers $q$ and $r$. Then $g(A) = qA + rI$. If $g(A)$ is not a scalar matrix, then $q\neq0$. |
H: Can't figure out this transformation matrix
So basically I want to write a transformation matrix to take me out of one coordinate system and into another.
The transformation has to be as follows:
1) The positive z axis normalized as Vector(0,0,1) has to map to an arbitrary direction vector in the new coordinate system Vector(a,b,c)
2) The origin in the original coordinate system has to map to an arbitrary position P in the new coordinate system.
3) This might be redundant but the positive Y axis has to map to a specific direction vector(d,e,f) which is perpendicular to Vector(a,b,c) from before.
So my question is twofold: 1) How would I go about constructing this transformation matrix and 2) Is this enough data to ensure that any arbitrary vector in coordinate system 1 will be accurately transformed in coordinate system 2?
AI: $(1)$ Request $2$ says that we are working with affine transformations, because you are mapping $O$ to another arbitrary point of $\mathbb{R}^3$. But in order to be a valid affine transformation of the form $ \overrightarrow{v}' = A\overrightarrow{v} + b$ (b is a translation), we must have that $\det A \neq 0$.
to define $A$, you have to figure out how it transforms a basis of your space, in your case $\overrightarrow{e_1},\overrightarrow{e_2},\overrightarrow{e_3}$ (the normalized x, y, z axes).
$(2)$ $A \overrightarrow{e_3}$ is fixed by request 1, and likewise $A\overrightarrow{e_2}$ by request 3, but you need to know something about the image of $\overrightarrow{e_1}$. Even if you want it to be orthogonal to the images of the other elements of the basis, you still have 2 different options for $A \overrightarrow{e_1}$ (the two possible directions of the vector).
So you can't build a unique transformation with this information. In other words, if you define $A$ as your matrix, then you can define $A'$ with the property $A' \overrightarrow{e_1} = -A\overrightarrow{e_1}$, and this will also satisfy your requests. (To build $A'$, just switch the signs on the first column of $A$.)
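Here is a minimal sketch of the construction (mine, not part of the original answer; the function name and the flip flag are illustrative): the images of $\overrightarrow{e_3}$ and $\overrightarrow{e_2}$ are prescribed, the image of $\overrightarrow{e_1}$ is taken as a cross product, and the sign ambiguity is exactly the flip choice:

    import numpy as np

    def make_transform(z_image, y_image, p, flip=False):
        """Points map as v -> A v + p; directions map through A alone."""
        z = np.asarray(z_image, float); z = z / np.linalg.norm(z)
        y = np.asarray(y_image, float); y = y / np.linalg.norm(y)
        x = np.cross(y, z)              # unit vector orthogonal to both images
        if flip:
            x = -x                      # the second, mirror-image option
        A = np.column_stack([x, y, z])  # columns = images of e_x, e_y, e_z
        return A, np.asarray(p, float)

    A, b = make_transform([0.6, 0.8, 0.0], [0.0, 0.0, 1.0], [1.0, 2.0, 3.0])
    print(A @ [0, 0, 1])       # (0.6, 0.8, 0.0): image of the z direction
    print(A @ [0, 1, 0])       # (0.0, 0.0, 1.0): image of the y direction
    print(A @ [0, 0, 0] + b)   # (1.0, 2.0, 3.0): the origin lands at p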
H: Validity of residue outside the domain
Using the identity theorem I can see that $f(z)=\dfrac{2}{3+z}$ and hence 1 is true and 4 is false. This far is easy. But for 2 and 3 I can see that $f$ is not defined at $z=3$ and $-3$ is not a isolated singular point of $f$ considering the domain $D.$ Is this a valid logic to strike out 2 and 3?
AI: I think you're missing here the point that you already answered correctly, imo, the question: you chose (1) and that's fine, period.
Now, you don't have to worry what happens outside $\,\Bbb D\,$ as your question's function has existence only in $\,\Bbb D\,$ ...
BTW, your $\,f\,$ also fulfills (3)... |
H: How to express $a_n$?
Let $\{a_n\}$ be a sequence such that $a_{n+2}=a_{n+1}+a_{n}$ with $a_1=1$ and $a_2=1$, i.e., it is $\{1,1,2,3,5,8,13, \dots\}$. I don't know what $a_n$ is, i.e., how can I express $a_n$ in closed form?
Thanks for your help:)
AI: The charecteristic equation is
$$r^2=r+1$$
and it's roots are
$$r_1=\frac{1+\sqrt{5}}{2}\quad r_2=\frac{1-\sqrt{5}}{2}$$
hence
$$a_n=\alpha(r_1)^n+\beta(r_2)^n$$
We determine $\alpha$ and $\beta$ from $a_1$ and $a_2$: this gives $\alpha=\frac{1}{\sqrt{5}}=-\beta$, i.e.
$$a_n=\frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^n-\left(\frac{1-\sqrt{5}}{2}\right)^n\right]$$
H: shadow cast by a circle
A point source emits light at a circular disc (thickness negligible), and a shadow is left on a wall (XY plane) behind and parallel to the disc. The Z component of distance between the point source and the disc is 'a' units. The Z component of distance between disc and wall is 'b' units. The line joining the point source and the centre of the disc lies on the YZ plane, intersects the wall (XY plane) at origin, and intersects the Z axis at angle theta. If the radius of the disc is r, find the equation of the perimeter of the shadow projected on the XY plane
AI: I think you can use similar triangles to show that the image is a circle centred at the origin and calculate the radius. Take a point $P$ on the circle. Call the position of the light source $S$, the centre of the circle $C$, the origin $O$ and let the image of $P$ in the $XY$ plane be $Q$.
Then $CP$ is parallel to $OQ$ because the disc lies in a plane parallel to the $XY$ plane, so $\triangle SOQ$ is similar to $\triangle SCP$. The similarity ratio is $\frac {SO}{SC}$, and the distance $OQ = r\frac {SO}{SC}$. Since the $Z$ components of $SC$ and $SO$ are $a$ and $a+b$ respectively, $\frac{SO}{SC}=\frac{a+b}{a}$, so the perimeter of the shadow is the circle $$x^2+y^2=\left(\frac{(a+b)\,r}{a}\right)^2.$$ Note that $\theta$ drops out, precisely because the line $SO$ passes through the origin.
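To check numerically, here is a short sketch of my own (not part of the answer): wall at $z=0$, disc at $z=b$, source at $z=a+b$, with $S$, $C$, $O$ collinear along a line in the $YZ$ plane at angle $\theta$ to the $Z$ axis.

```python
import numpy as np

a, b, r, theta = 2.0, 3.0, 0.5, 0.3                 # arbitrary test values
S = np.array([0, (a + b) * np.tan(theta), a + b])   # light source
C = np.array([0, b * np.tan(theta), b])             # centre of the disc

t = np.linspace(0, 2 * np.pi, 200)
P = C + np.stack([r * np.cos(t), r * np.sin(t), np.zeros_like(t)], axis=1)

lam = (a + b) / a            # ray S + lam*(P - S) reaches z = 0 at this parameter
Q = S + lam * (P - S)        # shadow of the rim on the wall

radii = np.hypot(Q[:, 0], Q[:, 1])
print(radii.min(), radii.max())    # both ~ 1.25 = r*(a+b)/a
```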
H: Calculating determinant of matrix
I have to calculate the determinant of the following matrix:
$$\begin{pmatrix}
a&b&c&d\\b&-a&d&-c\\c&-d&-a&b\\d&c&-b&-a
\end{pmatrix}$$
Using following hint:
Calculate determinant of matrix $AA^{T}$ and use the theorem that
$\det(AB) = \det A \cdot \det B$
I simply don't see how I can use this hint in the calculation.
AI: We have
$$AA^T=(a^2+b^2+c^2+d^2)I_4$$
then
$$\det(A)^2=\det (AA^T)=(a^2+b^2+c^2+d^2)^4$$
hence
$$\det(A)=-(a^2+b^2+c^2+d^2)^2$$
and we choose the negative sign: $\det(A)$ is a continuous function of the entries $a,b,c,d$ that never vanishes on $\mathbb{R}^4\setminus\{0\}$, so its sign is constant there, and $\det(A)=-1$ in the case $a=1$, $b=c=d=0$.
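Both identities are easy to confirm symbolically; this sympy check is my own addition:

```python
from sympy import symbols, Matrix, eye, simplify

a, b, c, d = symbols('a b c d')
A = Matrix([[a,  b,  c,  d],
            [b, -a,  d, -c],
            [c, -d, -a,  b],
            [d,  c, -b, -a]])

s = a**2 + b**2 + c**2 + d**2
print((A * A.T - s * eye(4)).expand())   # zero matrix: A A^T = s I_4
print(simplify(A.det() + s**2))          # 0, i.e. det(A) = -s^2
```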
H: Prove that if $\gcd(f, P_A) = 1$ for some matrix $A$ and polynomial $f$ then $f(A)$ is invertible
Let $f \in \mathbb F[x]$ and $A \in M_{n \times n}(\mathbb F)$. Prove: if $\gcd(f,P_A) = 1$, then $f(A)$ is an invertible matrix.
This is what I did so far:
If $\gcd(f,P_A) = 1$ then $f$ and $P_A$ have no common divisors, and hence no common roots. From the Cayley–Hamilton theorem we know that $P_A(A) = 0$. In addition, $\gcd(f, m_A)=1$, because $m_A$ and $P_A$ have the same roots. Now I said: $f(A) \neq 0$, because the ideal $ I= \{g(x) \mid g(A)=0\}$ is generated by $m_A$, and since $\gcd(f, m_A) = 1$ we have $f \not\in I$, which means $f(A) \neq 0$. That means that the element $(xI - A)$ is not a factor of $f$.
And right there is where I could not continue.
Any direction would be appreciated!
AI: By Bézout's identity, since $\gcd(f,P_A)=1$ there exist $U,V\in\mathbb{F}[x]$ such that
$$Uf+VP_A=1$$
hence, evaluating at $A$ and using $P_A(A)=0$ (Cayley–Hamilton),
$$U(A)f(A)+V(A)P_A(A)=U(A)f(A)=I$$
so $f(A)$ is invertible.
Added: this is another proof, without use of Bézout's identity.
The matrix $f(A)$ is invertible if and only if $0$ is not one of its eigenvalues. Moreover, it is easy to see that if $\lambda$ is an eigenvalue of $A$ then $f(\lambda)$ is an eigenvalue of $f(A)$, and since $\gcd(f,P_A)=1$, $f$ and $P_A$ have no common root, hence we can conclude.
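To see the Bézout argument in action on a concrete matrix, here is a sketch of my own using sympy (`poly_at` is a hypothetical helper, not a sympy function):

```python
from sympy import symbols, Poly, gcdex, Matrix, eye, zeros

x = symbols('x')

def poly_at(p, A):
    """Evaluate a sympy Poly at a square matrix via Horner's scheme."""
    n = A.shape[0]
    out = zeros(n, n)
    for coeff in p.all_coeffs():          # highest-degree coefficient first
        out = out * A + coeff * eye(n)
    return out

A = Matrix([[0, 1], [2, 1]])
P = Poly(A.charpoly(x).as_expr(), x)      # characteristic polynomial of A
f = Poly(x**2 + 1, x)                     # coprime to P in this example

U, V, g = gcdex(f, P)                     # Bezout: U*f + V*P = g
print(g)                                  # 1, so gcd(f, P_A) = 1
print(poly_at(U, A) * poly_at(f, A))      # identity: U(A) is the inverse of f(A)
```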
H: A question on an unbounded function
Does there exist a function $f$ that has a finite value at each point $x$ of $[0,1]$, yet is unbounded on every neighborhood of every such $x$?
Thanks for your help.
AI: I think such a function can be given by
$$
f(x) = \begin{cases}
0,\text{ if }x\notin\Bbb Q
\\
n,\text{ if }x = m/n \in \Bbb Q,
\end{cases}
$$
where the representation of rationals as fractions is assumed to be in lowest terms.
Clearly, the function is finite at every point. To show that it is unbounded on every neighborhood, consider the following argument.
If $x$ is irrational, consider the sequence $x_k\to x$ where $x_k$ is the decimal representation of $x$ truncated to the $k$'th digit. Then $f(x_k)$ grows unboundedly: the $x_k$ are rationals converging to $x$, infinitely many of them distinct (the decimal expansion of an irrational never terminates), and any collection of distinct rationals with denominators at most $N$ is spaced at least $1/N^2$ apart, so the reduced denominators of the $x_k$ cannot stay bounded.
If $x$ is rational, fix the neighborhood of it you're interested in, pick any irrational number $x'$ inside it, and repeat the procedure above.
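To illustrate numerically, here is a small Python sketch of my own, applying $f$ to decimal approximations of $\sqrt 2$:

```python
from fractions import Fraction
from decimal import Decimal, getcontext

getcontext().prec = 30
sqrt2 = Decimal(2).sqrt()

for k in (1, 3, 5, 10, 15):
    x_k = Fraction(round(sqrt2, k))              # sqrt(2) rounded to k digits
    print(k, x_k, "f(x_k) =", x_k.denominator)   # the denominators blow up
```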
H: Arithmetic with the natural log
We have:
$$ \ln(p^3 + 4) - \ln(4) = 2$$
What I did is:
$$ \ln (p^3 + 4) = \ln(4) + \ln(e^2)$$
$$p^3 + 4 = 4 + e^2$$
$$ p = e^{2/3}$$
Why is this incorrect?
AI: Correct your mistake using the identity
$$\log (a)+\log(b)=\log(ab)$$
so $\ln(4)+\ln(e^2)=\ln(4e^2)$; hence $p^3 + 4 = 4e^2$, not $4+e^2$, and $p = \sqrt[3]{4e^2-4}$.
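A quick numeric check of both roots; this sketch is my own addition:

```python
from math import log, e

p = (4 * e**2 - 4) ** (1 / 3)         # corrected root
print(log(p**3 + 4) - log(4))         # 2.0

p_wrong = e ** (2 / 3)                # the attempt in the question
print(log(p_wrong**3 + 4) - log(4))   # ~1.05, not 2
```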
H: If $a \mid c, b \mid c, \gcd (a,b)=1$ then $ab \mid c.$
If $a \mid c, b \mid c, \gcd (a,b)=1$ then $ab \mid c.$
I understand that the given statement is true; however, I'm struggling to write the proof.
For example, let $a=2$, $b=3$, $c=6$: then $2 \mid 6$, $3 \mid 6$, $\gcd(2,3)=1$, and indeed $2\cdot 3 \mid 6$.
This checks the statement in one case, but I can't write a general proof in words. How would you write one? I think proving theorems is the hardest part of the course.
AI: Suppose that $a\mid c$ and $b\mid c$. Since $b\mid c$, the quotient $\dfrac cb$ is an integer and $c=\dfrac{c}{b}\,b$, so $a\mid \dfrac{c}{b}b$. Since $(a,b)=1$, Euclid's Lemma gives $a\mid \dfrac c b$, which yields $ab\mid c$.
The other option is using Bézout: since $(a,b)=1$, there exist $s,t\in\Bbb Z$ such that $$at+sb=1\tag 1$$
Now write $c=aj$ and $c=bk$. Multiplying $(1)$ by $c$ gives
$$act+scb=c\tag 2$$
And replacing $c$ by $aj$ and $bk$ appropriately in $(2)$, we get
$$a(bk)t+s(aj)b=c$$
which gives
$$(ab)(kt+sj)=c$$ which is saying $ab\mid c$
ADD: (A version of) Euclid's Lemma says
Suppose $a\mid bc$ and $(a,b)=1$. Then $a\mid c$.
That is, if a number $a$ divides a product $bc$ and is coprime to the factor $b$, then it must divide the other factor.
Proof: Since $(a,b)=1$, $ab=\operatorname{lcm}(a,b)$. Since $a\mid bc$ and $b\mid bc$, $bc$ is a common multiple of $a$ and $b$, so their least common multiple $ab$ divides it; that is, $ab\mid bc$. But this means $a\mid c$.
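The Bézout coefficients in $(1)$ can be computed mechanically with the extended Euclidean algorithm; this Python sketch is my own illustration of the identity $(ab)(kt+sj)=c$:

```python
from math import gcd

def ext_gcd(a, b):
    """Return (g, t, s) with a*t + b*s = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

a, b, c = 10, 21, 420                    # coprime a, b, both dividing c
g, t, s = ext_gcd(a, b)
assert g == gcd(a, b) == 1 and a * t + b * s == 1

j, k = c // a, c // b                    # c = a*j = b*k
assert (a * b) * (k * t + s * j) == c    # the identity derived above
print(k * t + s * j, c // (a * b))       # both print 2
```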