H: Partial integration
We want to integrate
$$ f(x) = 2x \cos(x)$$
We use partial integration where $2x = g$ and $\cos(x) = f'$
I end up with
$$d (2x \sin(x)) - \sin(x) \cdot d2x$$
What confuses me is the term $\sin(x) \cdot d2x$
How would I be able to evaluate this term?
AI: $$ f(x) = 2x \cos(x)$$
We use partial integration where $2x = g$ and $\cos(x) = f'$
I know what you mean but if $ f(x) = 2x \cos(x)$, $f'(x)$ is not equal to $\cos x$. So I denote $$u(x)=2x,\qquad v^{\prime }(x)=\cos x.$$
Then
$$
\begin{eqnarray*}
u^{\prime }(x) &=&2 \\
v(x) &=&\int \cos x\ dx=\sin x.
\end{eqnarray*}
$$
From
$$
\begin{equation*}
I=\int u(x)v^{\prime }(x)\ dx=u(x)v(x)-\int u^{\prime }(x)v(x)\ dx,
\end{equation*}
$$
we get
$$
\begin{eqnarray*}
I &=&\int \underset{u(x)}{\underbrace{2x}}\cdot\text{ }\underset{v^{\prime }(x)}{\underbrace{\cos x}}dx=\underset{u(x)}{\underbrace{2x}}\cdot\underset{v(x)}{\underbrace{\sin x}}-\int \underset{u'(x)}{\underbrace{2}}\cdot\underset{v(x)}{\underbrace{\sin x}}\ dx \\
&=&2x\sin x-(-2\cos x)+C \\
&=&2x\sin x+2\cos x+C.
\end{eqnarray*}
$$
What confuses me is the term $\sin(x) \cdot d2x$
The term we got was $2\sin(x)\,dx$: since $d(2x)=2\,dx$, your $\sin(x)\cdot d(2x)$ is just $2\sin(x)\,dx$, which is exactly the integrand $u'(x)v(x)\,dx$ above.
Added: We have selected $u(x)$ and $v(x)$ according to the LIATE rule (Logarithm, Inverse trigonometric, Algebraic, Trigonometric, Exponential). By doing this, we try to:
Find $v(x)$ easily from $v'(x)$, and
Evaluate $\int v(x)u'(x)\ dx$ more easily than $\int u(x)v'(x)\ dx$.
This rule works most of the time, because when we differentiate the polynomial (algebraic function) $u(x)=2x$ we get a simpler algebraic function, and when we integrate the direct trigonometric function $v'(x)=\cos x$ we get another direct trigonometric function. |
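A quick sanity check of the final antiderivative (a sketch, assuming sympy is available):
import sympy as sp
x = sp.symbols('x')
# indefinite integral of 2x*cos(x); sympy omits the arbitrary constant C
print(sp.integrate(2*x*sp.cos(x), x))  # 2*x*sin(x) + 2*cos(x)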
H: Big-O Notation and Algebra
This is my first question here. Trying to simplify the following.
$$f = O\left(\frac{5}{x}\right) + O\left(\frac{\ln(x^2)}{4x}\right)$$
I give it a try as follows.
$$\begin{align}
f &= O\left(\frac{5}{x}\right) + O\left(\frac{2\ln(x)}{4x}\right), \\
f &= O\left(\frac{5}{x}\right) + O\left(\frac{\ln(x)}{2x}\right), \\
f &= O\left(\frac{10}{2x}\right) + O\left(\frac{\ln(x)}{2x}\right), \\
f &= O\left(\frac{10 + \ln(x)}{2x}\right), \\
f &= O\left(\frac{\ln(x)}{2x}\right).
\end{align}$$
Is this OK?
AI: Yes, that's ok.
Note that you can always omit non-zero multiplicative constants in $O(\cdot)$-terms. For instance, $O(4x) = O(x)$ and $O(2/x) = O(1/x)$. So you could get rid of those constants right away, and get a somewhat shorter version of the proof:
$$f = O\left(\frac{5}{x}\right) + O\left(\frac{\ln(x^2)}{4x}\right) = O\left(\frac{1}{x}\right) + O\left(\frac{\ln x}{x}\right) = O\left(\frac{\ln x}{x}\right).$$
But nothing is wrong with your derivation either. |
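A quick numerical illustration that the ratio of the two sides stays bounded (a sketch):
from math import log
for x in (1e1, 1e3, 1e6, 1e9):
    val = 5/x + log(x**2)/(4*x)
    print(val/(log(x)/x))  # equals 5/ln(x) + 1/2, tending to 1/2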
H: Can this equation be factored down? $\frac{(2^{y}-2) - 2^{y-x}}{2^{y}-1} $
Can this equation be factored down so as to be smaller? Or is this as small as it will go?
$$
\frac{(2^{y}-2) - 2^{y-x}}{2^{y}-1}
$$
AI: Let $y=3, x=1$, then $2^y-1=7$ and your expression becomes $\cfrac 27$. Whenever the denominator is a prime $p$ and $y\gt 2, y\gt x \gt 0$ the numerator will be an integer less than $p$. So there can be no general cancellation.
However cancellation will occur in some cases; for example, when $y=4, x=1$ we get $\cfrac 6{15}=\cfrac25$. |
H: Possible dimensions for a sum of two vector spaces
Let $U, W \subseteq \mathbb{R}^6$ be vector subspaces of dimension $4$. Show that $U \cap W$ contains at least $2$ and at most $4$ linearly independent vectors.
Using the dimension theorem I showed, by assuming the opposite, that $0 \leq \dim(U+W) \leq 3$ or $7 \leq \dim(U+W) \leq 8$.
But I'm not sure why this is a contradiction. I understand that if $7 \leq \dim(U+W) \leq 8$ then there's no possible way a subspace of $\mathbb{R}^6$ could have a basis with $\geq 7$ vectors. I'm not sure why that basis can't have $3$ vectors or less.
AI: You should know the basic result about this, namely Grassmann's formula
$$
\dim(U+W)=\dim U+\dim W-\dim(U\cap W).
$$
Since $U$ and $W$ are four dimensional subspaces of $\mathbb{R}^6$, you surely have
$$
4\le\dim(U+W)\le 6
$$
Can you go on from here? |
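For the record, here is the remaining step (a sketch): Grassmann's formula gives
$$\dim(U\cap W)=\dim U+\dim W-\dim(U+W)=8-\dim(U+W),$$
and $4\le\dim(U+W)\le6$ then yields $2\le\dim(U\cap W)\le4$; a subspace of dimension $d$ contains $d$ linearly independent vectors and no more.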
H: Is it sufficient to claim that the limits of $\frac{x+2}{x^2+x+1}$ and $\frac1x$ must be the same?
I was trying to evaluate this limit: Calculate $\lim\limits_{x \to \infty}\left(\frac{x^2+2x+3}{x^2+x+1} \right)^x$. After doing long division, I got, like everybody else, $\lim\limits_{x\to\infty}\left(1+\frac{x+2}{x^2+x+1}\right)^x$. Would it be a mistake to say that $\lim\limits_{x\to\infty}\frac{x+2}{x^2+x+1}=\lim\limits_{x\to\infty}\frac1x$ and just conclude that the answer is $e$? I know I would be right in this situation, but can I approximate things like this in general? Thanks!
AI: I am sure you are aware that it is not enough to say that the limit as $x$ goes to infinity of $\frac{1}{x}$ is "the same as" the limit of $\frac{x+2}{x^2+x+1}$.
However, what you probably meant is that the limit of the ratio of the two quantities is $1$. That is,
$$\lim_{x\to\infty} \frac{\frac{1}{x}}{\frac{x+2}{x^2+x+1}}=1.$$
That is enough.
With some experience, one can scan the expression and see that for large $x$, $\frac{x+2}{x^2+x+1}$ "behaves like" $\frac{1}{x}$, and therefore the limit must be $e$. Of course, everything changes if $x^2$ is replaced by $2x^2$.
At this stage of your mathematical development, you would be expected to go through further detail. For example, we can start by noting that the exponent is $\frac{x^2+x+1}{x+2}\frac{x(x+2)}{x^2+x+1}$. Once we have done this sort of thing a few dozen times, then, with some caution, we can begin to take jumps. |
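The limit itself can be checked symbolically (a sketch, assuming sympy is available):
import sympy as sp
x = sp.symbols('x', positive=True)
expr = ((x**2 + 2*x + 3)/(x**2 + x + 1))**x
print(sp.limit(expr, x, sp.oo))  # E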
H: Endomorphism Ring of Abelian Groups
In the paper
"Über die Abelschen Gruppen mit nullteilerfreiem Endomorphismenring."
Szele considers the problem of describing all abelian groups whose endomorphism ring contains no zero-divisors. He proved that there is no such group among the mixed groups, while $C(p)$ and $C(p^\infty)$ are the only torsion groups with this property.
I do not have access to this paper. Moreover, I do not speak German. Can someone give me a reference, in English or French, for this result, or sketch the proof?
AI: Kulikov proved that an indecomposable abelian group is either torsion-free or $C(p^k)$ for some $k=0,1,\dots,\infty$. A direct summand creates a zero divisor in the endomorphism ring: Let $G = A \oplus B$ and define $e(a,b) = (a,0)$. Then $e^2=e$ and $e(1-e) = 0$. However $1-e$ is the endomorphism that takes $(a,b)$ to $(0,b)$ so it is not zero either. |
H: Quadratic equation with parameter
Stuck solving this equation.
Full text:
For what real values of the parameter do the common solutions of the equations become identical?
1. $y = mx - 1$
2. $x^2 = 4y$
ans. $m = \pm 1$
2a. $x^2 - 4y = 0$
So I started by substituting 1. into 2a.:
$x^2 - 4(mx - 1) = 0$
$x^2 - 4mx + 4 = 0$
Solutions must be identical so we must have
x1 = x2 and y1 = y2
From the book:
Therefore the two common solutions of (1) and (2) become identical when and only when the roots of the equation are equal; that is when the discriminant vanishes
$d = B^2 - 4AC$
$A = x^2$
$B = -4mx$
$C = 4$
Here I got stuck:
$d = (-4mx)^2 - 4 \cdot x^2 \cdot 4$
$(-4mx)^2 - 4 \cdot x^2 \cdot 4 = 0$
$(-4mx)^2 + 16 x^2 = 0$
Looking at previous solutions, I should get the value of $m$ from the discriminant by obtaining a trinomial that I could factor. I am unable to form a trinomial.
So how can this be solved?
AI: The discriminant of a quadratic in $\,x\, $ does not involve $\,x\,$ but only the coefficients, so
$$x^2-4mx+4=0\implies \Delta=(-4m)^2-4\cdot1\cdot4=16m^2-16=0\iff$$
$$\iff 16m^2=16\iff m^2=1\iff m=\pm\,1$$ |
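The same discriminant computation, done symbolically (a sketch, assuming sympy is available):
import sympy as sp
x, m = sp.symbols('x m')
disc = sp.discriminant(x**2 - 4*m*x + 4, x)
print(disc)               # 16*m**2 - 16
print(sp.solve(disc, m))  # [-1, 1]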
H: "Total" degree of a polynomial?
What is the difference between the "degree" of a polynomial and its "total degree"?
AI: The total degree of a polynomial in more than one variable is the maximum, over its monomials, of the sum of the exponents of the variables in a single monomial. For example
$$\deg(x_1^2x_2x_3^4-3x_2+4x_1x_4^5-x_1x_2^3x_3^2)=7$$
You can also define the local or particular degree with respect to some particular variable, say:
$$\deg_1(x_1^2x_2x_3^4-3x_2+4x_1x_4^5-x_1x_2^3x_3^2)=2\;,\;\;\deg_4(x_1^2x_2x_3^4-3x_2+4x_1x_4^5-x_1x_2^3x_3^2)=5\ldots$$ |
H: Relationship between number of conjugacy classes and number of irreducible representations of a group
For a finite group $G$ the number of irreducible representations over an algebraically closed field $F$ is at most the number of conjugacy classes of elements whose orders are coprime to the characteristic of $F$.
What about over fields that aren't algebraically closed?
The motivation for this question is essentially knowing when all of the irreducible representations of a group over a particular field (in particular a finite one) have been found.
As an example, $A_4$ has four conjugacy classes and has four irreducible representations over fields such as $\mathbb{C}$ or $\mathbb{F}_7$ but only has three irreducible representations over $\mathbb{F}_5$. In the first two cases the original theorem holds so we know we have found all irreducible representations but for the third case, how do we know that there aren't any other possible irreducible representations?
AI: You'll want to read about $K$-conjugacy classes which are exactly what you are asking about and described in this question.
For example, the $\mathbb{F}_5$-conjugacy classes of $A_4$ are precisely the sets of elements of the same order. Usually the elements of order $3$ split into two classes, but these are related: $x$ is in one class exactly when $x^{-1}$ is in the other. However, $x^5$ and $x$ are always in the same $\mathbb{F}_5$-class, and $x^5 = x^{-1}$ when $x$ has order $3$.
Pazderski, Gerhard.
"On the number of irreducible representations of a finite group."
Arch. Math. (Basel) 44 (1985), no. 2, 119–125.
MR780258
DOI:10.1007/BF01194075 |
H: How to calculate the integral of $x^x$ between $0$ and $1$ using series?
How to calculate $\int_0^1 x^x\,dx$ using series? I read from a book that
$$\int_0^1 x^x\,dx = 1-\frac{1}{2^2}+\frac{1}{3^3}+\dots+(-1)^n\frac{1}{(n+1)^{n+1}}+\cdots$$ but I can't prove it. Thanks in advance.
P.S: I found some useful materials here and here.
AI: Just write
$$x^x=e^{x\ln x}=\sum_{n=0}^{\infty}\frac{(x\ln x)^n}{n!}$$
and use that
$$\int_0^1(x\ln x)^n dx=\frac{(-1)^n n!}{(n+1)^{n+1}}.$$
To show the last formula, make the change of variables $x=e^{-y}$ so that
$$\int_0^1(x\ln x)^n dx=(-1)^{n}\int_0^{\infty}y^ne^{-(n+1)y}dy,$$
which is clearly expressible in terms of the Gamma function. |
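Numerically, the integral and the series agree (a sketch using mpmath):
from mpmath import mp, quad, mpf
mp.dps = 20
integral = quad(lambda x: x**x, [0, 1])
series = sum((-1)**n / mpf(n + 1)**(n + 1) for n in range(25))
print(integral, series)  # both approximately 0.7834305107...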
H: Are function spaces Baire?
Let $X$ and $Y$ be manifolds and suppose that $X$ is a compact, complete metric space and $Y$ is a complete metric space.
So, both $X$ and $Y$ are Baire spaces.
Question: For what values of $k\geq 0$ is the space $C^k(X,Y)$ (with the $C^k$-topology) a Baire space?
For $k=0$, the space $C^0(X,Y)$ is a Baire space because it is a complete metric space with respect to the uniform metric (since $Y$ is complete).
What about the case $k=\infty$?
References are also very much appreciated.
AI: The short answer is: yes, they are Baire spaces for all $k$, including $k = \infty$.
As long as $X$ is compact, there is essentially no choice about the "correct" topology on $C^k(X,Y)$ and the $C^0$ case elegantly implies the $C^k$-case using the device of $k$-jet bundles $J^k(X,Y)$. These yield a natural identification of $C^k(X,Y)$ with a closed subspace of $C^0(X,J^k(X,Y))$ (also for $k = \infty$).
If we drop compactness of $X$ (but assume that $X$ is finite-dimensional, hence locally compact), the same technique works, but now there are two distinct reasonable topologies on $C^r(X,Y)$, the weak and strong Whitney topologies. Those still have the Baire property (the strong topology is usually not metrizable).
A solid reference with full details is Hirsch, Differential Topology, chapter 2, especially section 4. See in particular Theorem 4.4 and the paragraphs following it. |
H: Calculating the determinant of this complicated matrix
I am calculating the characteristic polynomial for this matrix:
$$A = \begin{pmatrix} 1 & 2 & \cdots & n \\ 1 & 2 & \cdots & n \\ \vdots & \vdots & \cdots & \vdots \\ 1 & 2 &\cdots & n \end{pmatrix}$$
First I was asked to figure out that $0$ is an eigenvalue: since the matrix is not invertible, $0$ is indeed an eigenvalue, and its geometric multiplicity is $n-1$. Now I need to calculate the characteristic polynomial, but I am finding this determinant hard!
$$\mbox{det}\begin{pmatrix} \lambda - 1 & -2 & \cdots & -n \\ -1 & \lambda - 2 & \cdots & -n \\ \vdots & \vdots & \cdots & \vdots \\ -1 & -2 & \cdots & \lambda -n \end{pmatrix} = ? $$
AI: The last eigenvalue $\lambda$ is the trace of the matrix $A$ so
$$\lambda=1+2+\cdots+n=\frac{n(n+1)}{2}$$
hence the characteristic polynomial is
$$\chi_A=x^{n-1}\left(x-\frac{n(n+1)}{2}\right)$$ |
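A quick check of the characteristic polynomial for a small case (a sketch, assuming sympy is available; here $n=4$, so $n(n+1)/2=10$):
import sympy as sp
n = 4
A = sp.Matrix([[j for j in range(1, n + 1)] for _ in range(n)])
x = sp.symbols('x')
print(sp.factor(A.charpoly(x).as_expr()))  # x**3*(x - 10)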
H: Introduction to Pseudodifferential operators
I'm interested in an elementary introduction to pseudodifferential operators and their applications to hyperbolic PDEs. I know measure theory, Fourier analysis and some elementary (linear) hyperbolic PDEs, but not functional analysis, distributions, Sobolev spaces, etc. Can you recommend a suitable intro text? Thanks
AI: There is of course Hörmander's magnum opus The Analysis of Linear Partial Differential Operators (Springer); pseudodifferential operators are discussed in volume III.
Less technical is Michael Taylor's book Pseudodifferential Operators (Princeton University Press). He also has a set of lecture notes and a pdf of his book Pseudodifferential Operators and Nonlinear PDEs (Birkhäuser) on his website. |
H: Dropping the orientable condition from the Thom isomorphism theorem.
My first question is: what are some examples of non-orientable $n$-plane bundles for which it is easy to see that there is no Thom class with $\mathbb{Z}$ coefficients?
I would like to know some examples because the real question I have is: where in the proof (in Milnor's book) of the existence of the Thom class do we use the fact that the bundle is oriented? To say a little more (and probably something stupid), it seems to me that in the base case, where the bundle is trivial, we may take $u = 1 \times e^n \in H^n(B\times \mathbb{R}^n, B \times \mathbb{R}^n_0)$ where $1 \in H^0 (B)$, and it will restrict to a non-trivial form on each fiber.
AI: An example is the Möbius bundle $E \to S^1$. The Thom space is $\mathbb RP^2$ which does not have the same $\mathbb Z$ cohomology (modulo a shift by 1) as $S^1$ since $H^*(\mathbb R P^2)$ has torsion. To see that $\mathbb R P^2$ is the Thom space, note that the unit disk bundle is a rectangle with one pair of opposite ends identified with a twist (the closed Möbius strip). Then identifying all of the edges to a point amounts to gluing the 2-ball to itself via the antipodal map on the boundary, which gives $\mathbb R P^2$.
As for your real question, orientability is equivalent to the existence of an element $u \in H^n(E, E_0)$ that restricts to a generator on each fiber. In the case of the trivial vector bundle you are using a choice of orientation. In general this won't be possible. If you think in terms of finding a non-zero section of $\Lambda^n E$, $n = rk~E$, you can always do this locally as $e_1 \wedge \cdots \wedge e_n$, where $e_1,\ldots,e_n$ is an orthonormal basis. If you chose a different orthonormal basis (e.g. a different trivialization) then this can change by $\pm 1$, and there is a topological restriction to making this choice consistently. It would be a good exercise to work this out on the Möbius band (where you can take a cover where the transition function is just multiplication by -1) to see why there doesn't exist a global top form (here just a single non-vanishing section and for real line bundles, orientability is equivalent to being trivial). |
H: Why dividing by zero still works
Today, I was at a class. There was a question:
If $x = 2 +i$, find the value of $x^3 - 3x^2 + 2x - 1$.
What my teacher did was this:
$x = 2 + i \;\Rightarrow \; x - 2 = i \; \Rightarrow \; (x - 2)^2 = -1 \; \Rightarrow \; x^2 - 4x + 4 = -1 \; \Rightarrow \; x^2 - 4x + 5 = 0 $. Now he divided $x^3 - 3x^2 + 2x - 1$ by $x^2 - 4x + 5$.
The quotient was $x + 1$ and the remainder $x - 6$. Now since $\rm dividend = quotient\cdot divisor + remainder$, he concluded that $x^3 - 3x^2 + 2x - 1 = x-6$ since the divisor is $0$.
Plugging $2 + i$ into $x - 6$, we get $-4 + i$.
But my question is, how was he able to divide by zero in the first place? Why did dividing by zero work in this case?
AI: The division your teacher did is polynomial division. He did not divide by zero; he divided by $x^2 - 4x + 5$.
The long division he did is just an algorithm that allows you to get the following identity:
$$x^3 - 3x^2 + 2x - 1 = (x^2 - 4x + 5)(x+1) + x-6$$
This is an identity involving multiplication, not division. Now, when you plug in $x = 2 + i$, you're not dividing by zero; you're multiplying by zero, which you should agree is allowed. |
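The same long division, done symbolically (a sketch, assuming sympy is available):
import sympy as sp
x = sp.symbols('x')
q, r = sp.div(x**3 - 3*x**2 + 2*x - 1, x**2 - 4*x + 5, x)
print(q, r)                 # x + 1, x - 6
print(r.subs(x, 2 + sp.I))  # -4 + I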
H: Where can I learn about the lattice of partitions?
A set $P \subseteq \mathcal{P}(X)$ is a partition of $X$ if and only if all of the following conditions hold:
$\emptyset \notin P$
For all $x,y \in P$, if $x \neq y$ then $x \cap y = \emptyset$.
$\bigcup P = X$
I have read many times that the partitions of a set form a lattice, but never really considered the idea in great detail. Where can I learn the major results about such lattices? An article recommendation would be nice.
I'm also interested in the generalization where condition 3 is disregarded.
AI: G. Birkhoff, Lattice Theory, Providence, Rhode Island, 1967, Chapter 4, Section 9. |
H: Generators of a subgroup of $SL_2(\mathbb{Z}/24\mathbb{Z})$
So I have this subgroup of $SL_2(\mathbb{Z}/24\mathbb{Z})$ which has $256$ elements. Is there a way in Sage to get a list of its generators? The "only" information I have on the group is the list of its elements.
If it is not implemented in Sage, does anyone have a reference for an algorithm to do this?
AI: I did it by using GAP. Have a look at the following, and I hope it helps; if not, tell me and I will remove it.
gap> SL24:=SL(4,Integers mod 24);
gap> T:=GeneratorsOfGroup( SL24);;
gap> for i in [1..Size(T)] do Print(T[i],"\n"); od;
[ [ ZmodnZObj( 0, 24 ), ZmodnZObj( 23, 24 ), ZmodnZObj( 0, 24 ),ZmodnZObj( 0, 24 ) ],
[ ZmodnZObj( 0, 24 ), ZmodnZObj( 0, 24 ), ZmodnZObj( 1, 24 ), ZmodnZObj( 0, 24 ) ],
[ ZmodnZObj( 0, 24 ), ZmodnZObj( 0, 24 ), ZmodnZObj( 0, 24 ), ZmodnZObj( 1, 24 ) ],
[ ZmodnZObj( 1, 24 ), ZmodnZObj( 0, 24 ), ZmodnZObj( 0, 24 ), ZmodnZObj( 0, 24 ) ] ]
[ [ ZmodnZObj( 0, 24 ), ZmodnZObj( 23, 24 ), ZmodnZObj( 0, 24 ), ZmodnZObj( 0, 24 ) ],
[ ZmodnZObj( 1, 24 ), ZmodnZObj( 0, 24 ), ZmodnZObj( 0, 24 ), ZmodnZObj( 0, 24 ) ],
[ ZmodnZObj( 0, 24 ), ZmodnZObj( 0, 24 ), ZmodnZObj( 1, 24 ), ZmodnZObj( 0, 24 ) ],
[ ZmodnZObj( 0, 24 ), ZmodnZObj( 0, 24 ), ZmodnZObj( 0, 24 ), ZmodnZObj( 1, 24 ) ] ]
[ [ ZmodnZObj( 1, 24 ), ZmodnZObj( 1, 24 ), ZmodnZObj( 0, 24 ), ZmodnZObj( 0, 24 ) ],
[ ZmodnZObj( 0, 24 ), ZmodnZObj( 1, 24 ), ZmodnZObj( 0, 24 ), ZmodnZObj( 0, 24 ) ],
[ ZmodnZObj( 0, 24 ), ZmodnZObj( 0, 24 ), ZmodnZObj( 1, 24 ), ZmodnZObj( 0, 24 ) ],
[ ZmodnZObj( 0, 24 ), ZmodnZObj( 0, 24 ), ZmodnZObj( 0, 24 ), ZmodnZObj( 1, 24 ) ] ] |
H: How do I calculate a double integral like $\iint_\mathbf{D}e^{\frac{x-y}{x+y}}dx\,dy$?
The Problem is to integrate the following double integral $\displaystyle\iint_\mathbf{D}\exp\left(\frac{x-y}{x+y}\right) dx\, dy$ using the technique of transformation of variables ($u$ and $v$).
There is a given $D$ with $D:= \{(x,y):\ x\geq 0,\ y \geq 0,\ x+y \leq 1\}$
My approach: it wasn't quite successful at all. I am quite familiar with calculus, but I am stuck on multiple integrals. I thought about the following transformation: $u = x-y$ and $v = x+y$.
But I don't know the next step.
Another suggestion: I thought about integrating the exponential function. I have in mind that $\int e^x dx = e^x$. So maybe: $$\iint_\mathbf{D} \exp\left(\frac{x-y}{x+y}\right) dx\ dy = \exp\left(\frac{x-y}{x+y}\right) ?$$
AI: Consider the transformation:
$u=x+y$ and $v=\dfrac{x}{x+y}$
$\implies x=uv$ and $y=u(1-v)$.
Here our domain is $D= \{(x,y):\ x\geq 0,\ y \geq 0,\ x+y \leq 1\}$
As $ x\geq 0,\ y \geq 0,\ x+y \leq 1\\\implies uv\geq 0,\ u(1-v) \geq 0,\ u \leq 1\\\implies 0\leq u\leq 1,0\leq v \leq 1$
The transformed domain is $E=\{(u,v):0\leq u\leq 1,0\leq v \leq 1\}$
Again, the Jacobian is $J=-u$.
Hence
$$\begin{align}\iint_\mathbf{D} e^{\frac{x-y}{x+y}}\, dx\, dy &=\int_0^1\int_0^1e^{\frac{uv-u(1-v)}{u}}\,|-u|\, du\, dv\\&=\left(\int_0^1u\,du\right)\left( \int_0^1e^{2v-1}\,dv\right)\end{align}$$ |
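A numerical cross-check against the closed form $\frac{e-e^{-1}}{4}$ obtained by evaluating the two factors (a sketch using mpmath):
from mpmath import mp, quad, exp, e
mp.dps = 15
direct = quad(lambda x: quad(lambda y: exp((x - y)/(x + y)), [0, 1 - x]), [0, 1])
closed = (e - 1/e)/4
print(direct, closed)  # both approximately 0.587600596...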
H: Maxima of sum of exponential.
I want to find the maxima of $f=e^{-x}+e^{-2x}$, $x \ge 0$. I know the maximum happens at $x=0$, however when I differentiate and equate to zero I get : $\,e^{x}=-2$ which leads to $x=\ln(-2)$. Can any one point why the differentiation method doesn't lead to the correct answer?
AI: Note that $$f'(x)=-e^{-x}-2e^{-2x}=-e^{-2x}(e^x+2)=-\frac{e^x+2}{e^{2x}}$$ is negative for all $x$ in the domain of definition of $f$, so $f$ is strictly decreasing. Then $f$ has a maximum precisely if its given domain of definition has a least element, in this case $0$.
We can only use the first derivative test (setting the first derivative equal to $0$ and solving) at points on the interior of the domain of definition. Indeed, your work shows that $f$ has no maximum on the positive numbers. However, if we look to the single boundary point of the domain of definition (namely $x=0$), we can manually show that $f$ has a maximum there (using an argument like the one I gave above). We simply can't apply the first derivative test there, and this example is an excellent reason why.
More generally, suppose $D\subseteq\Bbb R$, and that $g:D\to\Bbb R$ is a function that is differentiable in the interior of $D$ and continuous on $D$. Then if $g$ obtains a maximum, it will be either at a critical point in the interior of $D$, or at a point on the boundary of $D$. |
H: How to (quickly) prove that $24p+17$ is not a square number
Computer says $24p+17$ is not square number for $p<10^7$ so I guess it's not. I know that squares of odd numbers are all $8p+1$ but $24p+17$ passes the test
And how to solve problems like this in general? Thx in advance
AI: The number is $2 \pmod 3$. No such number is a perfect square.
If you are not familiar with the modular notation, any perfect square is either of the form $3k$ or $3k+1$. Your number is $3k+2$. |
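Both facts are easy to machine-check (a sketch):
# squares mod 3 are only ever 0 or 1
print({(x*x) % 3 for x in range(3)})           # {0, 1}
# while 24p + 17 is always 2 mod 3
print({(24*p + 17) % 3 for p in range(1000)})  # {2}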
H: Absolute value of infinite sum smaller than infinite sum of absolute values
A question emerging from an exercise in Ok, E. A. (2007). Real Analysis with Economic Applications. Princeton University Press.
The exercise consists in showing that if $\sum_{i=1}^\infty x_i$ converges, then
$|\sum_{i=k}^\infty x_i| \leq \sum_{i=k}^\infty |x_i|$.
I do not get why the condition $\sum_{i=1}^\infty x_i$ is necessary. Why can't we say in general that
$|\sum_{i=k}^\infty x_i| = |~lim_{n\rightarrow \infty}~ x_k + x_{k+1} + \dots + x_{k+n}|$
$~~~~~~~~~~~~~~= ~lim_{n\rightarrow \infty}~ |~x_k + x_{k+1} + \dots + x_{k+n}|$
$~~~~~~~~~~~~~~\leq ~lim_{n\rightarrow \infty}~ |x_k| + |x_{k+1}| + \dots + |x_{k+n}|,$ by the triangular inequality
$~~~~~~~~~~~~~~ = ~\sum_{i=k}^\infty |x_i|$
Is it that $|\sum_{i=k}^\infty x_i|$ is not well-defined when $\sum_{i=k}^\infty x_i = \pm \infty$, in which case the convergence of $\sum_{i=1}^\infty x_i$ would be a way to guarantee that $\sum_{i=k}^\infty x_i \neq \pm \infty$? The standard definition of the absolute value function is over the reals, not the extended reals, but it seems like it is not a fundamental problem to extend it to $\infty$ and $-\infty$.
AI: Just so that I can mark the question as answered in two days:
As Brian noted, "There are worse ways for $\sum_{i=1}^{\infty}x_i$ to be undefined: consider $x_i=(-1)^i$." That is, without convergence the tail sums may fail to exist at all (not just equal $\pm\infty$), so the left-hand side would be meaningless. |
H: Changing bounds of integrals
If I have:
$$\int_{L}^{\infty }e^{\dfrac{-(x-\sigma \sqrt{T})^2}{2}}\,dx$$
let $y = x - \sigma \sqrt{T}$
$$\int_{L - \sigma \sqrt{T}}^{\infty }e^{\dfrac{-y^2}{2}} \, dy$$
Why does the lower bound change in the new integral?
AI: The lower bound was from $x=L$, after the transformation, we get $y|_{x=L}=x-\sigma\sqrt T |_{x=L} = L-\sigma \sqrt T $, so the lower bound goes from $y=L-\sigma\sqrt T$.
Intuitively, you're performing the integration according to a new variable $y$, which is just a translation of $x$, so you need to translate the boundaries appropriately. Of course, the infinite boundary will remain infinity. |
H: Why are continued fractions for irrational numbers always convergent?
Like in the title:
Why are continued fractions for irrational numbers (i.e. infinite fractions) always convergent?
AI: Normally this is proved in steps.
The even convergents (i.e. the second, fourth, etc.) are an increasing sequence.
The odd convergents are a decreasing sequence.
All the even convergents are less than all the odd convergents.
The gap between consecutive convergents (one of which is even, and one is odd) decreases to $0$ as you go further out in the sequence.
Since the even convergents are increasing and bounded above (by any odd convergent), and the odd convergents are decreasing and bounded below, both subsequences converge; the vanishing gap then forces the two limits to coincide, so the continued fraction converges. |
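As an illustration, here are the first convergents of $\sqrt{2}=[1;2,2,2,\dots]$, computed with the standard recurrences $h_n=a_nh_{n-1}+h_{n-2}$, $k_n=a_nk_{n-1}+k_{n-2}$ (a sketch):
from fractions import Fraction
def convergents(coeffs):
    h_prev, h = 1, coeffs[0]
    k_prev, k = 0, 1
    yield Fraction(h, k)
    for a in coeffs[1:]:
        h_prev, h = h, a*h + h_prev
        k_prev, k = k, a*k + k_prev
        yield Fraction(h, k)
cs = list(convergents([1] + [2]*7))
print([float(c) for c in cs])
# 1.0, 1.5, 1.4, 1.41666..., 1.41379..., ... alternating around sqrt(2)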
H: $A \in M_{n \times n} (\mathbb R)$, $n\geq 2$, rank($A) = 1$, trace($A) = 0$. Prove A is not diagonalizable
Given:
$A \in M_{n \times n} (\mathbb R)$, $n\geq 2$, rank($A) = 1$, trace($A) = 0$. Prove A is not diagonalizable and find $P_A(x)$.
So I said:
if $n \geq 2$ and rank($A)=1$ then $A$ is not invertible. That means that it has eigenvalue $0$. Its geometric multiplicity is $n-1$, since, again, rank$(A)=1$. Now we also know that the trace is the sum of all the eigenvalues, which means $\operatorname{trace}(A) = 0^{n-1} + \lambda_x = \lambda_x = 0$, which means ALL the eigenvalues are $0$.
Is that correct? And if so, does it mean that it is not diagonalizable? (And if it's correct, then of course $P_A(x) = x^{n}$.)
AI: Yes, it's correct so far.
But probably it is better to say the roots of characteristic polynomial instead of the eigenvalues, and you should note somewhere that they may also be complex numbers.
There is also one tiny error/mistype: it should be ${\rm trace}(A)=(n-1)\cdot 0+\lambda_x$.
Yes, we have that the characteristic polynomial is $P_A(x)=x^n$.
And, for finishing, either you can consider the Jordan form of $A$: it must have only $0$'s in the diagonal... (A small example is $A=\pmatrix{0&1\\0&0}$.)
Or, you can conclude that such a diagonalizable matrix could be only ${\rm diag}(0,0,0,0,..,0)$ which is the null matrix, but this has rank $0$. |
H: Reference request, self study of cardinals and cardinal arithmetic without AC
I'm looking for references (books/lecture notes) for :
Cardinality without choice, Scott's trick;
Cardinal arithmetic without choice.
Any suggestions? Thanks in advance.
AI: Jech, The Axiom of Choice.
Herrlich, The Axiom of Choice.
Halbeisen, Combinatorial Set Theory.
Jech, Set Theory, 3rd Millennium Edition.
Jech's (first) book is kinda old, and some progress has been made since then, but I don't think there has been a lot that we can say about cardinal arithmetic that was discovered since that book was published (on their order, other structure properties and complexities - sure).
Herrlich's book is not a set-theoretical book per se, but it has a reasonable chapter about basic failures of cardinal arithmetic. In particular the existence of infinite Dedekind-finite sets, which gives us a great source of counterexamples.
For the most part, let me tell you what we know about cardinal arithmetic without the axiom of choice:
The basic addition, multiplication and exponentiation is well-defined as finitary operations. Those are easily found in any set theoretical textbook.
Everything else can fail miserably.
Some interesting papers:
Rubin, Jean E. Non-constructive properties of cardinal numbers. Israel J. Math. 10 (1971), 504–525.
Halbeisen, Lorenz; Shelah, Saharon Consequences of arithmetic for set theory. J. Symbolic Logic 59 (1994), no. 1, 30–40.
Halbeisen, Lorenz; Shelah, Saharon Relations between some cardinals in the absence of the axiom of choice. Bull. Symbolic Logic 7 (2001), no. 2, 237–261. |
H: Derivative of $\left(x^x\right)^x$
I am asked to find the derivative of $\left(x^x\right)^x$. So I said let $$y=(x^x)^x \Rightarrow \ln y=x\ln x^x \Rightarrow \ln y = x^2 \ln x.$$Differentiating both sides, $$\frac{dy}{dx}=y(2x\ln x+x)=x^{x^2+1}(2\ln x+1).$$
Now I checked this answer with Wolfram Alpha and I get that this is only correct when $x\in\mathbb{R},~x>0$. I see that if $x<0$ then $(x^x)^x\neq x^{x^2}$ but if $x$ is negative $\ln x $ is meaningless anyway (in real analysis). Would my answer above be acceptable in a first year calculus course?
So, how do I get the correct general answer, that $$\frac{dy}{dx}=(x^x)^x (x+x \ln(x)+\ln(x^x)).$$
Thanks in advance.
AI: In the step
$$\dfrac{dy}{dx}=y(2x\ln x+x),$$
split $2x\ln x$ as $x\ln x+x\ln x$:
$$\dfrac{dy}{dx}=y(x\ln x+x\ln x+x).$$
Since in the question $y=(x^x)^x$ and $x\ln x=\ln x^x$, this becomes
$$\dfrac{dy}{dx}=(x^x)^x(x\ln x+\ln x^x+x),$$
which is the stated general answer; just rearrange your second-to-last step. |
H: Element of order $2n$ in symmetric group $S_n$
I've been recently reading some articles about orders of elements in $S_n$ and I know that in order to find max order in $S_n$ we can use Landau function though I think that for small $n$ it is better to do it "manually".
My question is: For what $n$ can $S_n$ contain an element of order $2n$?
Could you tell me how to answer that question.
It's not possible for $S_3$ to have such an element, because here a $3$-cycle has order $3 \neq 6$.
Similarly for $S_4$ (max ord = $4$),
for $S_5$ (max ord = $6$ for cycles $(ab)(cde)$),
and for $S_6$ - here we have max ord = $6$.
for $S_7$ max ord = $12$ for cycles $(abc)(defg)$. $14=2\cdot7$, but $2+7=9>7$.
Then for $S_8$ we have max ord = $15$ for $(abc)(defgh)$,
for $S_9$ we have max ord =$20$ for $(abcd)(efghi)$ but we won't get $18$ because $18$ cannot be written as a product of two coprime numbers $\in \{1,2,...,8\}$
It seems that it can occur if we can split a permutation into disjoint cycles whose lengths are coprime. But this is not sufficient.
Could you help me with that?
Thank you.
AI: Suppose that $n$ is not a power of a prime. Then we can write $n=ab$ where $a$ and $b$ are coprime and both at least $2$. Moreover we may assume that we shoved all the factors of $2$ into $a$, so that $b$ is odd. The plan is to prove that $2a+b\leq n$, so we can take the disjoint product of a $2a$-cycle and a $b$-cycle to get an element of order $2n$.
When $a$ and $b$ are both at least $3$ then $a+b \leq n/3 + 3$ (maximize $x+y$ subject to $x,y\geq 3$ and $xy=n$), so $2a+b \leq 2n/3 + 3 \leq n$ since $n\geq 9$. Otherwise we may assume that $a=2$ and $b=n/2$, in which case $2a+b = n/2 + 4 \leq n$ unless $n<8$.
It remains only to convince yourself that $S_n$ does not contain an element of order $2n$ when $n$ is $6$ or a power of a prime. |
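This can be brute-forced for small $n$, using the fact that the element orders in $S_n$ are exactly the least common multiples of the partitions of $n$ (a sketch, assuming sympy is available):
from math import lcm
from sympy.utilities.iterables import partitions
def has_element_of_order_2n(n):
    return any(lcm(*p.keys()) == 2*n for p in partitions(n))
print([n for n in range(2, 31) if has_element_of_order_2n(n)])
# [10, 12, 14, 15, 18, 20, 21, 22, 24, 26, 28, 30] -- no 6 and no prime powers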
H: Proof of exactness at the first two non-zero objects in the ker-coker sequence (snake lemma).
I am reading MacLane's chapter on Abelian Categories and I am proving the fact, needed for the snake lemma, that the sequence $0\to \text{Ke}f\to \text{Ke}g\to\text{Ke}h$ is exact at $\text{Ke}f$ and $\text{Ke}g$, where $\text{Ke}f$ denotes the domain of the kernel of $f$. MacLane says it follows by an easy diagram chase, but the solution I came up with, involves very simple ideas, yes, but I feel it is a bit lengthy. I just wanted to post what I did to ask if this is the "correct"/expected way to do it.
(I would have added the required diagrams, since it would make life easier to anyone reading, but I do not know how, so please excuse me - Thus I also need to describe all notations).
So, consider two short exact sequences $<m:a\to b,e:b\to c>$, $<m':a'\to b',e':b'\to c'>$ and a morphism $<f:a\to a',g:b\to b',h:c\to c'>$ between them. I also denote the maps between the domains of the kernels by $m_0:\text{Ke}f\to \text{Ke}g$ and $e_0:\text{Ke}g\to \text{Ke}h$.
Firstly, $e_0\circ m_0=0$. Indeed, the upper two squares are commutative, and thus $\text{ker}h\circ e_0\circ m_0=e\circ m\circ\text{ker}f=0$, which implies $e_0\circ m_0=0$, since $\text{ker}h$ is monic.
Next, $m_0$ is monic. Take $x_*\in_m \text{Ke}f$ and suppose that $m_0\circ x_*\equiv 0$. By commutativity, we have $\text{ker}g\circ m_0=m\circ \text{ker}f$, and thus, since $m_0\circ x_*\equiv 0$, we get $m\circ \text{ker}f\circ x_*\equiv 0$. But both $m$ and $\text{ker}f$ are monic, which imply that $x_*\equiv 0$.
Finally, exactness at $\text{Ke}g$. Take $y\in_m \text{Ke}g$, and suppose that $e_0\circ y\equiv 0$. Then, since $\text{ker}h\circ e_0=e\circ \text{ker}g$, we get that $\text{ker}h\circ e_0\circ y=e\circ \text{ker}g\circ y$, and thus $e\circ \text{ker}g\circ y\equiv 0$. Now, by exactness at $b$, there exists $x\in_m a$ such that $m\circ x\equiv \text{ker}g\circ y$. This means that there exist epis $u$ and $v$ such that $\text{ker}g\circ y\circ u=m\circ x\circ v$. Next, we observe that $m'\circ f\circ x\circ v=g\circ m\circ x\circ v=g\circ\text{ker}g\circ y\circ u=0$, and since $v$ is epi we get $m'\circ f\circ x=0$. But $m'$ is monic and so $f\circ x=0$. Thus, $x$ factors through $\text{ker}f$ and therefore there exists $t:\text{dom}(x)\to \text{Ke}f$ such that $x=\text{ker}f\circ t$. Now, $\text{ker}g\circ y\circ u=m\circ x\circ v=m\circ \text{ker}f\circ t\circ v=\text{ker}g\circ m_0\circ t\circ v$, and thus, since $\text{ker}g$ is monic, we get $y\circ u=m_0\circ t\circ v$, or $y\equiv m_0\circ t$, as desired.
AI: Yes, it all looks good, and written in details, this is indeed the straightforward proof that uses the 'generalized element' concept. |
H: Simplifying a quotient of complex numbers
Given the equation I am supposed to simplify :
$$\frac{(7 - 4i)}{(5 + 3i)}$$
I conclude that I should first multiply both the numerator and denominator by $(5 - 3i)$ (note : or by $7 + 4i$ but either will do), which leads me to :
$$\frac{(35 - 41i + 12i^2)}{(25 - 9i^2)}$$
However, none of the answers in the multiple choice answer sheet agree with my solution. I stare at the answer sheet for half an hour wondering how is it that none of the answers given contains a term containing $i^2$. Just looking at the problem I know that I should expect the last term to be the product of $-4i$ and $3i$ which should yield $-12i^2$. So I start pulling my hair out and eventually I find wolfram's algebra simplifier and I run the expression through them to get a step by step walkthrough and find that when they multiply the last terms for both numerator and denominator they ignore the i and invert the sign! For example, in the numerator their last term ends up being $-4i * -3i = -12$ (not $12i^2$ according to the rules I learned) and the denominator's last term ends up being $3i * -3i = 9$ (not $-9i^2$). Please, for all that's holy and sacred in the world of algebra, someone explain to me how this magic occurs because I'm about to throw myself out of the proverbial window!
Here's the answer which I don't understand for the life of me :
$$\frac{(23 - 41i)}{34}$$
p.s. If you could recommend me something, I am looking for an Algebra problem book which has challenging equations to simplify with steps and tips/tricks... I really want to nail this.
AI: Hint: $$i^2=-1$$ So the $12i^2$ in your numerator is $-12$ and the $-9i^2$ in your denominator is $+9$, giving $$\frac{35 - 41i + 12i^2}{25 - 9i^2}=\frac{35-12-41i}{25+9}=\frac{23 - 41i}{34}.$$ |
H: Find the functions whose length is proportional to the area below them
I'm trying to solve this problem:
"Find all the functions $f : \mathbb{R} \rightarrow \mathbb{R}^+$, $f \in C^1(\mathbb{R})$ such that the area below $f$ in $[a,b]$ is proportional to the length of the graphic of $f$ in that interval"
This is my progress:
I translate the problem into the following equality (taking in account the formula for the length of a function):
$$\int_a^b{f(x)}\mathrm{d}x = K \int_a^b\sqrt{1 + f'(x)^2}\mathrm{d}x$$
We take $\dfrac{\mathrm{d}}{\mathrm{d}b}$ in both sides and using the fundamental theorem of calculus:
$$f(b) = K \sqrt{1 + f'(b)^2}$$
From this last equality, I can see that the solution is going to be the hyperbolic cosine. But I would like to solve it more formally. Also, I think I have not taken into account that the $f$ I am looking for is positive.
AI: I would prefer to start by noting that for any $x$ in the interval, we have
$$\int_a^x f(t)\,dt=K\int_a^x \sqrt{1+(f'(t))^2}\,dt.$$
Using the Fundamental Theorem of Calculus, we see that
$$f(x)=K\sqrt{1+(f'(x))^2}.$$
Write $y$ for $f(x)$. We get
$$\frac{dy}{dx}=\pm\frac{\sqrt{y^2-K^2}}{K}.$$
We have obtained two closely related separable differential equations. For definiteness pick the first one, rewrite it as
$$\frac{K\,dy}{\sqrt{y^2-K^2}}=dx$$
and integrate. To do the integration on the left, make the substitution $y=K\cosh w$, or, less attractively, $y=K\sec w$.
|
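The resulting catenary $y=K\cosh\left(\frac{x-c}{K}\right)$ can be verified against $f=K\sqrt{1+f'^2}$ by comparing squares, both sides being positive (a sketch, assuming sympy is available):
import sympy as sp
x, K, c = sp.symbols('x K c', positive=True)
f = K*sp.cosh((x - c)/K)
print(sp.simplify(f**2 - K**2*(1 + sp.diff(f, x)**2)))  # 0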
H: Difficulties with partial integration
I have asked several questions on the site regarding this topic already, but I can't seem to grasp this at all. Consider the following example:
$$ h(x) = e^{2x} \sin x$$
We have to find the integral. I rewrote this to the form:
$$e^{2x} \sin x \space dx = d( -\cos x e^{2x}) - (-\cos x \dfrac{1}{2} e^{2x} \space dx)$$
I continued this:
$$e^{2x} \sin x \space dx = d( -\cos x e^{2x}) - d (- \sin \space x \dfrac{1}{2} e^{2x}) - (- \sin x \dfrac{1}{4}e^{2x} \space dx)$$
I subtracted the last term with the term on the LHS:
$$\dfrac{3}{4} (e^{2x} \sin x) \space dx = d( -\cos x e^{2x}) - d (- \sin \space x \dfrac{1}{2} e^{2x})$$
Apparently this is totally wrong but I have no idea what I did wrong or how it should be done otherwise. Can anyone help me find my errors? And perhaps give some tips?
AI: You asked: "I've never properly learned the notations with du and dx and dy/dx and such. Does du mean u'?"
If you have a function $u(x)$ of the single variable $x$, the differential $du$ can be seen as the product of the derivative of $u(x)$ with the differential $dx$ of the independent variable, i.e. $du=u'(x)\ dx$.
For a detailed explanation of the notation see this answer.
--
The integration by parts corresponds to the following rule:
$$
\begin{equation*}
\int u(x)v^{\prime }(x)\ dx=u(x)v(x)-\int u^{\prime }(x)v(x)\ dx.
\end{equation*}
$$
We can select the functions $u(x),v(x)$ by using the LIATE rule as in my answer to your second last question or the techniques explained in the answers to your last question LIATE / ILATE rule. We get$^1$:
$$\int e^{2x}\sin x\,dx=\frac{1}{2}e^{2x}\sin x-\int \frac{1}{2}e^{2x}\cos
x\,dx$$
and
$$\int \frac{1}{2}e^{2x}\cos x\,dx=\frac{1}{4}e^{2x}\cos x+\frac{1}{4}\int
e^{2x}\sin x\,dx.$$
Consequently,
$$\begin{eqnarray*}
I &=&\int e^{2x}\sin x\,dx=\frac{1}{2}e^{2x}\sin x-\int \frac{1}{2}
e^{2x}\cos x\,dx \\
&=&\frac{1}{2}e^{2x}\sin x-\frac{1}{4}e^{2x}\cos x-\frac{1}{4}\int
e^{2x}\sin x\,dx \\
&=&\frac{1}{2}e^{2x}\sin x-\frac{1}{4}e^{2x}\cos x-\frac{1}{4}I.
\end{eqnarray*}
$$
Solving for $I$ we thus get
$$
\begin{eqnarray*}
\left( 1+\frac{1}{4}\right) I &=&\frac{1}{2}e^{2x}\sin x-\frac{1}{4}
e^{2x}\cos x \\
I &=&\frac{4}{5}\left( \frac{1}{2}\sin x-\frac{1}{4}\cos x\right) e^{2x}.
\end{eqnarray*}
$$
--
$^1$ The first integral can be evaluated as follows. If $u(x)=\sin x$ and $v^{\prime }(x)=e^{2x}$, then $u^{\prime }(x)=\cos x$
and $v(x)=\frac{1}{2}e^{2x}$. The integration by parts yields
$$\int \underset{u(x)}{\underbrace{\sin x}}\,\cdot\underset{v^{\prime }(x)}{\underbrace{e^{2x}}}\ dx=\underset{u(x)}{\underbrace{\sin x}}\,\cdot\underset{v(x)}{\underbrace{\frac{1}{2}e^{2x}}}-\int \underset{u^{\prime }(x)}{\underbrace{\cos x}}\cdot\underset{v(x)}{\,\underbrace{\frac{1}{2}e^{2x}}}\ dx.$$
Remark. As can be seen in AWertheim's answer the opposite selection $u(x)=e^{2x}$ and $v^{\prime }(x)=\sin x$ works too. |
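As a check, sympy returns an equivalent form of this antiderivative (a sketch):
import sympy as sp
x = sp.symbols('x')
print(sp.integrate(sp.exp(2*x)*sp.sin(x), x))
# 2*exp(2*x)*sin(x)/5 - exp(2*x)*cos(x)/5, which equals (4/5)(sin(x)/2 - cos(x)/4)e^(2x)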
H: What stops me from making this conclusion?
Suppose I want to find $\sin^6x+\cos^6x$. What stops me from saying that $\sin^2t=\sin^6x$, and $\cos^2t=\cos^6x$? Of course this is wrong because $\sin^2t+\cos^2t=1$ and $\sin^6x+\cos^6x$ does not equal 1. So what stops me from making this substitution? The domain $and$ range of the sixth-degree functions are completely within the quadratics.
AI: If you put $\sin^2 t = \sin^6 x$ you are not free to choose a value for $\cos^2 t$ - you have already constrained this by your initial choice. |
H: Definition of $C^k$ boundary
Can someone give me a resonable definition of $C^k$ boundary, e.g., to define and after give a brief explain about the definition.
I need this 'cause I'm not understanding what Evans' book says.
Thanks!
AI: In $\mathbb{R^n}$, the boundary of a subset is $C^k$ if it is locally the graph of a $C^k$ function in some direction. So the unit disk has $C^{\infty}$ boundary: at all points of the circle in the upper half plane, the boundary is the graph of the function $y=\sqrt{1-x^2}$, which has infinitely many derivatives at every point but the two end points. But those end points are in the graph of $x=\sqrt{1-y^2}$ or $x=-\sqrt{1-y^2}$, which also has infinitely many derivatives. |
H: If $|\lambda_i|=1$ and $\mu_i^2=\lambda_i$, then $|\mu_i|=1$?
If $|\lambda_i|=1$ and $\mu_i^2=\lambda_i$, then $|\mu_i|=1$?
$|\mu_i|=|\sqrt{\lambda_i}|=\sqrt{|\lambda_i|}=1$. Is that possible?
AI: Yes, that is correct. Alternatively, you could write $1=|\lambda_i|=|{\mu_i}^2|=|\mu_i|^2$, and use $|\mu_i|\ge 0$ to arrive at the unique solution $|\mu_i|=1$. |
H: Is there a great mathematical example for a 12-year-old?
I've just been working with my 12-year-old daughter on Cantor's diagonal argument, and countable and uncountable sets.
Why? Because the maths department at her school is outrageously good, and set her the task of researching a mathematician, and understanding some of the maths they did - the real thing.
So what else could we have done - thinking that we know our multiplication tables and fractions, but aren't yet completely confident with equations which have letters for unknown numbers?
I did think of proving that there are infinitely many primes - we can follow an argument - other suggestions welcome.
And incidentally, tell your local high school to do this ...
AI: Six people at a dinner party is sufficient to ensure that there are either three mutual strangers or three mutual acquaintances. In fact, six is the smallest number that ensures this phenomenon. This is the diagonal Ramsey number $R(3,3)$, and the proof can be demonstrated with a couple pictures and just a dash of the pigeonhole principle. There are lots of directions she could go after understanding $R(3,3)$ (though most of it is not due to Ramsey). |
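The claim $R(3,3)\le6$ is even small enough to verify by exhaustive search over all $2^{15}$ red/blue colorings of the edges of $K_6$ (a brute-force sketch):
from itertools import combinations, product
def has_mono_triangle(n, coloring):
    col = dict(zip(combinations(range(n), 2), coloring))
    return any(col[(a, b)] == col[(a, c)] == col[(b, c)]
               for a, b, c in combinations(range(n), 3))
edges = len(list(combinations(range(6), 2)))  # 15 edges in K_6
print(all(has_mono_triangle(6, c) for c in product((0, 1), repeat=edges)))  # True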
H: Prove that $U$ is a self adjoint unitary operator
Let $W$ be the finite dimensional subspace of an inner product space $V$ and $V=W\oplus W^\perp $.
Define $U:V \rightarrow V$ by $U(v_1+v_2)=v_1-v_2$ where $v_1\in W$ and $v_2 \in W^\perp$.
Prove that $U$ is a self adjoint unitary operator.
I know I have to show that $\parallel U(x) \parallel=\parallel x \parallel $ but can't proceed from this stage.
AI: $\langle U(x),U(x)\rangle = \langle U(v_1+v_2) , U(v_1+v_2)\rangle = \langle v_1 - v_2, v_1 - v_2\rangle = \langle v_1,v_1\rangle + \langle v_2,v_2\rangle = \langle x,x\rangle$
where the last two equalities come from the fact that $\langle v_1,v_2\rangle = 0$. For self-adjointness, write $y=w_1+w_2$ with $w_1\in W$, $w_2\in W^\perp$; then $\langle U(x),y\rangle = \langle v_1-v_2, w_1+w_2\rangle = \langle v_1,w_1\rangle - \langle v_2,w_2\rangle = \langle v_1+v_2, w_1-w_2\rangle = \langle x,U(y)\rangle$, so $U^*=U$. |
H: About a condition for a continuous mapping to be open.
The text (Foundations of General Topology, by Pervin, Second edition) says
a (continuous) mapping $f$ of $X$ into $X^*$ is open iff $f(i(E))\subseteq i^*(f(E))$ for every $E\subseteq X$.
EDIT: $i(E)$ is the interior of set $E$ in $X$. Similarly, $i^*$ is the interior operator in $X^*$.
However, shouldn't $f(i(E))=i^*(f(E))$?
Motivation:
$i^*(f(E))$ is an open set. Hence $f^{-1}[i^*(f(E))]$ will be an open set in $X$. This open set $\subseteq i(E)$. Hence, $i^*(f(E))\subseteq f(i(E))$
$i(E)$ is an open set in $X$. Hence, as $f$ is an open mapping, $f(i(E))$ will be an open set in $X^*$. This open set will be $\subseteq i^*(f(E))$. Hence, $f(i(E))\subseteq i^*(f(E))$
From $(1)$ and $(2)$ we conclude that $f(i(E))=i^*(f(E))$.
AI: Let $\newcommand{\Int}{\operatorname{Int}}X = \{ 0 , 1 \}$ be given the anti-discrete (trivial) topology, and let $Y = \{ y \}$ be given its only topology. It is easy to check that the constant mapping $f : x \mapsto y$ is open continuous. However $$f [ \operatorname{Int} ( \{ 0 \} ) ] = f [ \varnothing ] = \varnothing \subsetneq \{ y \} = \operatorname{Int} ( \{ y \} ) = \operatorname{Int} ( f [ \{ 0 \} ] ).$$
As far as your justification goes, while true that $f^{-1} [ \Int ( f [ E ] ) ]$ is open, it may not be a subset of $E$, because $f$ may not be one-to-one. Therefore the claim that $f^{-1} [ \Int ( f [ E ] ) ] \subseteq \Int (E)$ may be false (and it is in the above example). |
H: Subsequential Limits
I'm working through Rudin's PoMA at the moment, and I've been learning about subsequential limits. However, I'm somewhat confused and I have a question, which is more conceptual than an actual exercise.
I know that when a sequence converges the $\lim \space \sup$ and $\lim \space \inf$ are equal to the $\lim$.
But when the sequence diverges to negative or positive infinity, shouldn't the only subsequential limit be negative or positive infinity, respectively?
So my question is: is the $\limsup/\liminf$ concept only useful for sequences that oscillate around values (like $a_n=(-1)^n$)? Is it ever useful for any other sequences?
AI: Regarding your first question: yes, if the sequence has a limit (even infinity or minus infinity), then the $\limsup$ and $\liminf$ will agree with that.
The reason why $\limsup$ and $\liminf$ are useful is because, for real sequences, they always exist. So many things can be phrased using them, irrespective of whether you have a convergent sequence or not. Something that happens from time to time is that the proof that some limit exists consists in showing that $\limsup=\liminf$. |
H: Expansion of $(z-1)^2 / (z-2)(z^2+1)$ in $z = 2$
I have to expand
$$
f(z) := \frac { (z-1)^2 }{(z-2)(z^2+1)}
$$ around $z = 2$. I wanted to write $f(z) = \frac 1 {z-2} g(z)$ and then expand $g(z)$. Some hints would help a lot :)
AI: You've got a good notion, but I suggest a slightly different approach. First, set$$\frac{(z-1)^2}{(z-2)(z^2+1)}=\frac{A}{z-2}+\frac{B}{z+i}+\frac{C}{z-i},$$ and solve for $A,B,C$. This is called partial fraction decomposition.
Next, note that for $z\neq\pm i$ we can write $$\frac1{z+i}=\cfrac1{z-2+2+i}=\frac1{2+i}\cdot\cfrac1{1-\left(-\frac{z-2}{2+i}\right)}$$ and $$\frac1{z+i}=\cfrac1{z-2+2+i}=\frac1{z-2}\cdot\cfrac1{1-\left(-\frac{2+i}{z-2}\right)}.$$
Now, one of these can be expanded as a multiple of a geometric series in the annulus $|z-2|>\sqrt5$ and the other can be expanded as a multiple of a geometric series in the annulus $0<|z-2|<\sqrt5$. That is, we will use the fact that $$\frac1{1-w}=\sum_{k=0}^\infty w^k$$ whenever $|w|<1$. You should figure out which annulus works for which rewritten version, and find the respective expansions in both cases. That will give you two different Laurent expansions, valid in two different annuli.
We can do something similar for $\frac1{z-i}$. Use that, and the partial fraction decomposition I mentioned above, to get the expansion for $f(z)$ about $z=2,$ in whichever annulus you prefer.
Alternately, you can use the decomposition $$\frac1{z^2+1}=\frac{i}2\left(\frac1{z+i}-\frac1{z-i}\right),$$ find the expansion of $\frac1{z^2+1},$ rewrite $$(z-1)^2=\bigl((z-2)+1\bigr)^2=(z-2)^2+2(z-2)+1,$$ which gives you the expansion(s) of your $g(z),$ etc. |
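For the annulus $0<|z-2|<\sqrt5$, the expansion can be cross-checked symbolically (a sketch, assuming sympy is available; the exact output format may differ):
import sympy as sp
z = sp.symbols('z')
f = (z - 1)**2/((z - 2)*(z**2 + 1))
print(sp.series(f, z, 2, 2))
# leading term 1/(5*(z - 2)): the residue at z=2 is (2-1)^2/(2^2+1) = 1/5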
H: If $X=[x_{ij}]_{n \times n}$ then how prove $X^n=0$
Let $n\in \mathbb N$ and $A_1,A_2,..,A_n$ be arbitrary sets. Now define $X=[x_{ij}]_{n \times n}$ where
$$x_{ij}=
\begin{cases}
1, & A_i\subsetneq A_j \\
0, & \text{otherwise}
\end{cases}.$$
How do you prove $X^n=0$?
Thanks in advance.
AI: Hint: by iteration of the formula $(XY)_{ij}=\sum_{k=1}^nx_{ik}y_{kj}$ for the coefficients of a matrix product, we have
$$
(X^n)_{ij}=\sum_{1\leq i_1,\ldots,i_{n-1}\leq n}x_{ii_1}x_{i_1i_2}\cdots x_{i_{n-1}j}.
$$
For a term of this sum to be nonzero, we need $x_{ii_1}=x_{i_1i_2}=\ldots=x_{i_{n-1}j}=1$, i.e. $A_{i}\subsetneq A_{i_1}\subsetneq \ldots\subsetneq A_{i_{n-1}}\subsetneq A_j$. Can this happen? How many sets in the chain? |
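A quick numerical illustration (a sketch; as a hypothetical example take $A_i=\{1,\dots,i\}$, so that $A_i\subsetneq A_j$ exactly when $i<j$ and $X$ is strictly upper triangular):
import numpy as np
n = 4
X = np.array([[1 if i < j else 0 for j in range(n)] for i in range(n)])
print(np.linalg.matrix_power(X, n))  # the zero matrix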
H: Structure of p-adic units
I am trying to understand the structure of the $p$-adic units. I know that we can write
$$\mathbb{Z}_p^\times \cong \mu_{p-1} \times (1 + p\mathbb{Z}_p),$$
where $\mu_n$ are the $n$th roots of unity in $\mathbb{Z}_p$. My question is: what more can we say about the structure of $1 + p\mathbb{Z}_p$? It seems to be topologically generated by $1 + p$, right?
The reason I am interested is in order to understand $\mathbb{Z}_p^\times/\mathbb{Z}_p^{\times 2}$.
AI: For $p$ odd, the $p$-adic exponential in the form $x\rightarrow \exp(px)$ provably gives a topological group isomorphism from $\mathbb Z_p$ with addition to $1+p\mathbb Z_p$ (the logarithm gives an inverse).
For $p=2$, tweaking is required. |
H: Why can't you find all antiderivatives by integrating a power series?
if
$f(x) = \sum\limits^{\infty}_{n=0}\frac{f^{(n)}(0)}{n!}x^n$
why can't you do the following to find a general solution
$F(x) \equiv \int f(x)dx$
$F(x) = \int (\sum\limits^{\infty}_{n=0}\frac{f^{(n)}(0)}{n!}x^n) dx = \sum\limits^{\infty}_{n=0}\frac{f^{(n)}(0)}{n!}(\int x^n dx) = \sum\limits^{\infty}_{n=0}\frac{f^{(n)}(0)}{n!}(\frac{x^{n+1}}{n+1}) =
\sum\limits^{\infty}_{n=0}\frac{f^{(n)}(0)}{(n+1)!}x^{n+1}$
I was wondering about this because I tried this approach to finding the antiderivative $\int e^{x^2} dx$
AI: First of all, $\int f(x)\, dx$ is a collection of functions, not a function.
You should rather define $F(x) = \int_{t = 0}^x f(t) dt$.
Then, you should be careful about interchanging $\int$ and $\sum_{n=0}^\infty$.
This is true for finite sums, but not always for infinite series of integrals (see e.g. the Fatou–Lebesgue theorem); it is the same as interchanging a limit with integration.
Finally, you have to be able to calculate the infinite series you end up with. (For power series, the interchange is in fact justified inside the radius of convergence, so your series for $\int e^{x^2} dx$ is valid; it simply has no elementary closed form.) |
H: Multiplying the long polynomials for $e^x$ and $e^y$ does not give me the long polynomial for $e^{x+y}$
As an alternative to normal rules for powers giving $e^xe^y=e^{(x+y)}$ I am multiplying the long polynomial of the taylor series of $e^x$ and $e^y$. I only take the first three terms:
$$ \left(1+x+\frac{x^2}{2!}+\cdots\right)\left(1+y+\frac{y^2}{2!}+\cdots\right).$$
With this I try to reach
$$1+(x+y)+\frac{(x+y)^2}{2!}+\cdots$$
I collect a lot of terms and take the terms I know to be equal to $(x+y)^2=x^2+2xy+y^2$, but I'm then left with terms like $yx^2/2!+xy^2/2!$. These last terms I can't resolve. Am I doing something wrong, or can I just say that I'll keep those for the $+\cdots$ bit?
AI: Those terms belong to the $(x+y)^3$ term. The way to see that is that their total degree is $3$, à la the binomial theorem: indeed $\frac{(x+y)^3}{3!}=\frac{x^3+3x^2y+3xy^2+y^3}{6}$, and $\frac{yx^2}{2!}+\frac{xy^2}{2!}=\frac{3x^2y+3xy^2}{3!}$. The proper way to do this problem is with Cauchy products. |
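One can check this with truncations (a sketch, assuming sympy is available): multiplying two degree-$(N-1)$ truncations and subtracting the degree-$(N-1)$ truncation of $e^{x+y}$ leaves only terms of total degree at least $N$.
import sympy as sp
x, y = sp.symbols('x y')
N = 6
ex  = sum(x**n/sp.factorial(n) for n in range(N))
ey  = sum(y**n/sp.factorial(n) for n in range(N))
exy = sum((x + y)**n/sp.factorial(n) for n in range(N))
diff = sp.Poly(sp.expand(ex*ey - exy), x, y)
print(min(sum(mono) for mono in diff.monoms()))  # 6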
H: function in $L^1\setminus L^2$
I'm looking for an example of a function which belongs to the Banach space $L^1$ (i.e., $\int|f| < \infty$) but is not in $L^2$ (so $\int|f|^2$ is infinite).
Does anyone know such a function?
AI: Try $f(x) = \frac{1}{\sqrt{x}}$ on $(0,1)$: indeed $\int_0^1 x^{-1/2}\,dx = 2 < \infty$, while $\int_0^1 x^{-1}\,dx = \infty$. |
H: Relationship between frequency and period of harmonic motion (free undamped motion)
From $\frac{1}{8}x''+16x=0$ we obtain $$x=c_1\cos8\sqrt{2}t+c_2\sin8\sqrt{2}t$$ so that the period of motion is $\frac{2\pi}{8\sqrt{2}}=\frac{\sqrt{2}\pi}{8}$ seconds.
From $20x''+kx=0$ we obtain $$x=c_1\cos\frac{1}{2}\sqrt{\frac{k}{5}}t+c_2\sin\frac{1}{2}\sqrt{\frac{k}{5}}t$$ so that setting the frequency $\frac{1}{4\pi}\sqrt{k/5}$ equal to $\frac{2}{\pi}$ gives $k=320$ N/m. If $80x''+320x=0$, then $x=c_1\cos2t+c_2\sin2t$ so that the frequency is $\frac{2}{2\pi}=\frac{1}{\pi}$ cycles/second.
So I thought that theta represented the frequency of harmonic motion judging from the answer of question #1, but then in the second problem it seems that the period of harmonic motion is represented by theta.
To solve the first problem I did period = 2 pi / theta, but in the second problem I took theta since I thought theta was the frequency of the harmonic function, but instead they did theta / 2 pi. What does theta represent?
AI: I see no theta ($\theta$) in that problem. I assume you mean theta to be whatever is being multiplied with $x$ in sin(x), cos(x).
Your confusion stems from not distinguishing angular frequency (radians per second, usually represented by $\omega$) and linear frequency (cycles per second, usually represented by $f$). They are related by $\omega=2\pi f$. In an expression of the form $\cos(\omega t)$, omega gives you angular frequency, so to find linear frequency you must divide by $2\pi$.
Intuitively, $f$ is telling you how many full cycles are completed in 1 s (or whatever your unit of time is). However $\omega$ tells you what angle is swept in 1 s. To convert $f$ to $\omega$, you must multiply $f$ by $2\pi$ since there are $2\pi$ radians in a full circle. |
H: How do you solve $w^4=16(1-w)^4$?
Give your answer in Cartesian form.
$\dfrac{w^4}{(1-w)^4}=16$ Are you supposed to let $w=x+yi$?
$w^4=x^4+4x^3yi-6x^2y^2-4xiy^3+y^4=16$
I then know that you get roots $2,-2,2i,-2i$, but I don't know how?
I have done $x^4+y^4-6x^2y^2=16$
and $4x^3yi-4xy^3i=0$ => $x^3y=xy^3$ (which doesn't make sense)
Is this the wrong method?
AI: Right away, you should see that $w=0$ doesn't work, so observing that $$16(1-w)^4=2^4(1-w)^4=(2-2w)^4,$$ our problem becomes equivalent to $$1=\frac{16(1-w)^4}{w^4}=\frac{(2-2w)^4}{w^4}=\left(\frac{2-2w}w\right)^4=\left(\frac2w-2\right)^4,$$ meaning we need $$\frac2w-2\in\{\pm1,\pm i\}.$$ Finding those $4$ values of $w$ should then be relatively simple. |
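A direct check (a sketch, assuming sympy is available; note the actual roots, which differ from the $2,-2,2i,-2i$ guessed in the question, and the output order may differ):
import sympy as sp
w = sp.symbols('w')
print(sp.solve(sp.Eq(w**4, 16*(1 - w)**4), w))
# [2/3, 2, 4/5 - 2*I/5, 4/5 + 2*I/5]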
H: Calculus Leibniz' notation
I'm currently doing integration by parts, and I'm finding that the notation is what makes it tough for me. So I looked it up and found that:
$$\int u(x)v'(x)dx= u(x)v(x) - \int u'(x)v(x)dx$$
But the wikipedia said that this above equality is the same as:
$$ \int u dv = uv - \int v du $$
I think I have located my problem: I don't see how these 2 are equal at all. How could for example know that $$\int u(x)v'(x) dx = \int udv$$
I can't really see the equivalence here.
AI: The point is that when Leibniz conceived the derivative, he thought of it as the quotient of two infinitely small numbers $dy$ and $dx$ if $y = f(x)$, for example. In that case, the derivative was really written as $dy/dx$, and since $f'(x)=dy/dx$, we have $dy=f'(x)dx$ as the infinitely small quantity that $y$ varies at a rate $f'(x)$ when $x$ varies the infinitely small quantity $dx$.
However, this stuff isn't rigorous. Indeed, in standard analysis, it is impossible to conceive a number like an infinitesimal, and the use of this even as mere notation may lead to confusion. That's why in the modern language, we simply use the prime notation to denote the derivative. The next best thing to replace the infinitesimals $dy$ and $dx$ is the notion of a differential form; there's so much about them to be said that I won't explain here.
So, in truth, if you use this stuff that's not rigorous you have $dv=v'(x)dx$ and $du=u'(x)dx$ so that we can write:
$$\int u(x)v'(x)dx = u(x)v(x)-\int v(x)u'(x)dx$$
Simply as:
$$\int udv = uv - \int v du$$
Now, understand that this last thing is just a mnemonic rule so that people remember what to do when find an integral like that. The rigorous version is the first formula that's derived directly from the product rule for derivatives.
Indeed, many of the notations regarding one-dimensional integrals that refer to "let $u$ be that, then $du$ is that other thing" and so forth are just rules for you to remember easier what to do. As I've said, those $dx$, $du$ and everything else can have a rigorous meaning as differential forms when you study differential geometry. |
H: Let $A,B$ be $n \times n$ matrices so that $AB = 0$ If $A,B \neq 0$ what do I know about $A$ and $B$?
Let $A,B$ be $n \times n$ matrices so that $AB = 0$ If $A,B \neq 0$ what do I know about $A$ and $B$?
I want to expand my knowledge about matrix arithmetic and so on. Supposing the above, what do I know about both $A$ and $B$? I think I read somewhere something about the amount of $0$ columns in $A$ and $0$ rows in $B$, but I can't make the connection.
AI: First, you know that both matrices must be singular ($\det = 0$). If we had $A$ nonsingular ($\det A\ne 0$), then $A$ would have an inverse and we'd get $B=A^{-1}(AB)=A^{-1}0 = 0$. [Both Camerons commented this above, but the folks giving official "answers" missed this point.]
More specifically, you know that every vector $y=Bx$ satisfies $Ay=0$, so $C(B)\subset N(A)$, where $C(B)$ is the column space (range) of $B$ and $N(A)$ is the nullspace (kernel) of $A$. [$(AB)x = A(Bx)$, so if $y=Bx$ for some $x\in\mathbb R^n$, then $Ay=0$. By definition, $C(B)$ is the set of all vectors that can be written as $Bx$ for some $x\in\mathbb R^n$.] |
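A minimal concrete example of the situation (a sketch):
import numpy as np
A = np.array([[1, 0], [0, 0]])   # nonzero but singular
B = np.array([[0, 0], [0, 1]])   # nonzero but singular
print(A @ B)                     # the zero matrix
print(np.linalg.det(A), np.linalg.det(B))  # 0.0 0.0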
H: Long differential equation question?
We have the equation
$$xy'-y=(x+y) \ln \left(\dfrac{x+y}{x}\right)$$
To solve this equation, I first thought about a change of variables, but my friend suggested that I solve this with Lagrange's method. How can this be solved with Lagrange, because it seems odd to me?
Thank you.
AI: Try the substitution $z=1+\frac{y}{x}$. This substitution gives the separable equation $$z'x=z\ln z$$ |
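Separating gives $\frac{dz}{z\ln z}=\frac{dx}{x}$, so $\ln\ln z=\ln x+c$, i.e. $z=e^{Cx}$ and $y=x\left(e^{Cx}-1\right)$. This candidate can be verified against the original equation (a sketch, assuming sympy is available):
import sympy as sp
x, C = sp.symbols('x C', positive=True)
y = x*(sp.exp(C*x) - 1)
lhs = x*sp.diff(y, x) - y
rhs = (x + y)*sp.log((x + y)/x)
print(sp.simplify(lhs - rhs))  # 0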
H: Does $\sum_1^\infty\bigr(-\frac{1}{3}\bigl)^n \bigl(\frac{(-2)^n+3^n}{n}\bigr)$ converge?
This is a follow-up question (about whether or not the series converges at points on the circle of convergence) to this one:
Q: Calculate the radius of convergence of $\sum^\infty_1(x+1)^n\frac{(-2)^n+3^n}{n}$.
Mainly I need to check if this one converges:
$$\sum_1^\infty\left(-\frac{1}{3}\right)^n \ \left(\frac{(-2)^n+3^n}{n}\right)$$
Thank you for your help.
AI: This can readily be split into $$\sum_{n=1}^\infty\frac{(2/3)^n}n+\sum_{n=1}^\infty\frac{(-1)^n}n.$$ The left one converges by direct comparison to a geometric series, the right one converges by the alternating series test. |
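A numerical sanity check (my addition): the two pieces are standard logarithm series, $\sum_{n\ge1}\frac{(2/3)^n}{n}=\ln 3$ and $\sum_{n\ge1}\frac{(-1)^n}{n}=-\ln 2$, so the total should approach $\ln\frac32$:

```python
import math

# Split the general term as in the answer: (2/3)^n/n + (-1)^n/n
N = 100_000
total = sum((2/3)**n / n + (-1)**n / n for n in range(1, N + 1))
print(total, math.log(3/2))   # both ~0.40546 (the alternating part converges slowly)
```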
H: Bayes rule with multiple conditions
I am wondering how I would apply Bayes rule to expand an expression with multiple variables on either side of the conditioning bar.
In another forum post, for example, I read that you could expand $P(a,z \mid b)$ using Bayes rule like this
(see Summing over conditional probabilities):
$$P( a,z \mid b) = P(a \mid z,b) P(z \mid b)$$
However, directly using Bayes rule to expand $P(a,z \mid b)$ doesn't seem to be the right way to start out:
$$P(a,z\mid b) = { P(b\mid a,z)P(a,z) \over P(a)}$$
AI: Note that you didn't apply Bayes' Rule correctly; Bayes' Rule says that:
$$P(X|Y)={P(Y|X)P(X) \over P(Y)}$$
so your denominator should have actually been $P(b)$.
Instead, I will use the definition of conditional probability and multiplication rule (which together imply Bayes' Rule):
\begin{array}{cc}
P(X|Y) =\dfrac{P(X,Y)}{P(Y)} & (1)\\
P(X)P(Y|X) =P(X,Y)=P(Y)P(X|Y) & (2)
\end{array}
Thus, observe that:
$$ \begin{array}{r@{}ll}
P\big( (a,z) \mid b \big) &= \dfrac{P(a,z,b)}{P(b)} & \text{by (1), where } X=a,z \text{ and } Y=b\\
&= \dfrac{P(z,b)P\big(a \mid (z,b) \big)}{P(b)} &\text{by (2), where } X=a \text{ and } Y=z,b\\
&= \dfrac{P(b)P(z \mid b)P\big(a \mid (z,b) \big)}{P(b)} &\text{by (2), where } X=z \text{ and } Y=b\\
&= P(z \mid b)P\big(a \mid (z,b) \big) \\
&= P\big(a \mid (z,b) \big) P(z \mid b) \\
\end{array} $$
as desired. |
H: How far is it true that statements dependent on Axiom of Choice are not constructive.
Axiom of Choice is often used in mathematics to construct various objects, such as basis of $\mathbb{R}$ as a vector space over $\mathbb{Q}$, unmeasurable subset of $\mathbb{R}$, or a non-principal ultrafilter on $\mathbb{N}$.
It is a popular "meta-theorem" that if such construction essentially relies on the Axiom of Choice, then the object cannot be constructed explicitly. Here, by "essentially relying" I mean that it is consistent with ZF that what we are constructing (Hamel basis, ultrafilter, etc.) does not exist; by "explicit construction" I mean one that can be carried out within ZF.
My question is: Is this meta-theorem really true? Certainly, it is not possible to carry out a construction within ZF, and then prove it correct in ZF. However, it struck me that it might be possible that the construction itself fits in ZF, and it is only verification that requires the Axiom of Choice. Of course, it would be very bizarre, but bizarre things do happen. Do we have some strong evidence that they won't occur in this situation? How certain is it that, say, a Hamel basis can't be constructed in ZF?
AI: We know it is impossible to define a Hamel basis for $\Bbb R$ over $\Bbb Q$ (or construct a non-principal ultrafilter, and so on) because it is a theorem:
If $\sf ZFC$ is consistent, then the theory $\sf ZF+\text{There is no Hamel basis for }\Bbb R\text{ over }\Bbb Q$ is consistent.
Since it is consistent with $\sf ZF$ that there is no such basis, it means that we cannot prove from $\sf ZF$ that such basis exists. We have to assume more, in terms of choice.
Remember the completeness theorem tells us that $\sf ZF\vdash\varphi$ if and only if $\varphi$ is true in every model of $\sf ZF$. Similarly for $\sf ZFC$. So given a statement $\varphi$ which is provable from $\sf ZFC$, if we can find a model of $\sf ZF$ where $\varphi$ fails, we can be certain that $\sf ZF$ cannot prove it, and some axiom of choice is needed for the proof.
For example, we know that if $\sf ZFC$ is consistent then we can create a model in which there is no Hamel basis of $\Bbb R$ over $\Bbb Q$. Of course, in that model the axiom of choice fails, but the process itself is consistent. It might be the case that there are no models of $\sf ZFC$ at all, but even then we know how to make the argument work: we can translate these arguments into syntactic ones which establish the quoted statement above. |
H: What is the solution to this parametric equation problem?
How can I find the parametric equations of the line passing through the point $(-5,7,-2)$ and perpendicular to both the vectors $(2,1,-3)$ and $(5,4,-1)$?
AI: If you know the parametric equation of a line when a point on it and its direction vector are given, then a hint for finding the direction is to try to compute $(2,1,-3)\times (5,4,-1)$, for the cross product produces a vector perpendicular to both of these two vectors. |
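For concreteness (my addition, completing the hint numerically with NumPy):

```python
import numpy as np

u = np.array([2, 1, -3])
v = np.array([5, 4, -1])
print(np.cross(u, v))   # [ 11 -13   3] -- a direction vector for the line

# Parametric equations through (-5, 7, -2):
#   x = -5 + 11t,  y = 7 - 13t,  z = -2 + 3t
```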
H: I want to study $\sqrt[n]{n}$ and its behavior.
As I was studying some limit problems, I came across
$$\sqrt[n]{n}$$
and astoundingly found out that the graph of this has a maximum when $n = e$.
I thought there is no way that this is not a famous fact, and I am very interested in it. I looked up some phrases such as "nth roots" or "rational exponents" but I didn't find this fact right away.
Can someone guide me to a link or tell me at least what this expression goes by in order to do a little bit of researching ?
AI: Simply differentiate to find the maximum, but first note that
$$\sqrt[n]{n} =n^{1/n}=\exp(\log(n^{1/n}))$$
This should explain the connection with $e$. In more detail:
$$\frac{dn^{1/n}}{dn}=n^{1/n}\frac{1-\log n}{n^2}$$
So the derivative is only zero when $\log n = 1$, i.e., $n=e$. |
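A quick numerical check of the critical point (my addition):

```python
import math

for n in (2.5, math.e - 0.01, math.e, math.e + 0.01, 3.0):
    print(n, n ** (1 / n))
# The values peak at n = e ~ 2.71828, where n^(1/n) = e^(1/e) ~ 1.44467
```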
H: Solving $f_n=\exp(f_{n-1})$ : Where is my mistake?
I was trying to solve the recurrence $f_n=\exp(f_{n-1})$.
My logic was this : $f_n -f_{n-1}=\exp(f_{n-1})-f_{n-1}$.
The associated differential equation would then be $\dfrac{dg}{dn}=e^g-g$.
if $f(m)=g(m)>0$ for some real $m$ then for $n>m$ we would have $g(n)>f(n)$. Solving the differential equation by separating the variables gives the solution $g(n)= \mathrm{inv}(\int_c^{n} \dfrac{dt}{e^t-t}+c_1)+c_2$ for some $c,c_1,c_2$. That solution seems correct since $e^t-t=0$ has no real solution for $t$, so there are no issues with singularities near the real line.
However, $\mathrm{inv}(\int_c^{n} \dfrac{dt}{e^t-t}+c_1)+c_2$ does NOT seem to be anywhere near $f_n$, let alone larger than it!
So where did I make a big mistake in my logic ?
And can that mistake be fixed ?
AI: The problem is that the primitive $\displaystyle\int_\cdot^x\frac{\mathrm dt}{\mathrm e^t-t}$ does not converge to infinity when $x\to+\infty$.
The comparison between $(f_n)$ and $g$ reads
$$
\int_{f_1}^{f_n}\frac{\mathrm dt}{\mathrm e^t-t}\leqslant n-1,
$$
for every $n\geqslant1$. When $n\to\infty$, the LHS converges to a finite limit hence one can be sure that the LHS and the RHS are quite different when $n\to\infty$ and that this upper bound becomes trivial for every $n$ large enough.
Take-home message: to compare the sequence $(f_n)$ solving a recursion $f_{n+1}=f_n+h(f_n)$ and the function $g$ solving the differential equation $g'(t)=h(g(t))$ can be fruitful only when the integral $\displaystyle\int_\cdot^{+\infty}\frac{\mathrm dt}{h(t)}$ diverges. |
H: Indeterminate Limits
I have been studying independently through various online courses and I still have trouble understanding what to do with certain limits. I am hoping for some guidance on the following two problems (I do not need the answer so much as help understanding where I am going).
$$ \lim_{x\to 0} \frac{\sqrt{16 + 4x} - \sqrt{16 - 4x}}{x} $$
$$ \lim_{x\to 0} \frac{\frac{1}{(x + 6)^2} - \frac{1}{36}}{x} $$
I can tell that these are both $ \frac{0}{0} $ indeterminate forms and I can simplify them in a number of ways, but I cannot seem to get the $0$ out of the denominator. Any help pointing me in the right direction would be greatly appreciated!
AI: For the second one, you may find the difference of squares formula $(a+b)(a-b)=a^2-b^2$ useful, together with the observations that $$\frac1{(x+6)^2}=\left(\frac1{x+6}\right)^2$$ and $$\frac1{36}=\left(\frac16\right)^2.$$ Use that to rewrite the numerator and see what happens!
Surprisingly enough, we can once again use the difference of squares for the (fixed version) of your first one! We'll be going the other way, though. For $x$ sufficiently close to $0$, we have that $\sqrt{16-4x}$ and $\sqrt{16+4x}$ are both positive, so that in particular, $$\sqrt{16+4x}+\sqrt{16-4x}\ne 0.$$ From there, the difference of squares formula tells us that $a-b=\frac{a^2-b^2}{a+b}$ so long as $a+b\ne 0$. This lets us in particular rewrite $$\sqrt{16+4x}-\sqrt{16-4x}=\frac{(16+4x)-(16-4x)}{\sqrt{16+4x}+\sqrt{16-4x}}=\frac{8x}{\sqrt{16+4x}+\sqrt{16-4x}}.$$ Can you take it from there?
P.S.: You may also find my answer here to be helpful, dealing as it does with the more general case. |
H: Residue Theorem to Compute Integrals of Rational Functions
Any help would be very much appreciated. Thanks.
$$\int_{-\infty}^{\infty}\frac{x^2}{x^4-4x^2+5}dx$$
I need to compute the above integral using the Residue Theorem.
AI: Check the singularities of the function: in this case, the roots of the denominator. Consider only the ones whose imaginary part is strictly greater than $0$. If there were real roots, you would have to apply Jordan's lemma to the small "deviations" your path makes to avoid them; this comes from the fact that you are defining a particular closed path in order to apply the residue theorem (there are a lot of details to fill in here, so read any book on complex analysis).
Then verify that $f(z) := \frac{z^2}{z^4-4z^2+5} $ satisfies $|f(z)| \leq \frac{K}{|z|^{1+a}}$ for some $a >0$. If all the hypotheses are satisfied, apply Jordan's lemma (if needed) and the residue theorem to compute the integral.
The hypothesis on the modulus assures you that, once you define, say, a square of side $m$, as $m \rightarrow \infty $ only the integral along the real axis (which is what you are trying to compute) contributes to the result.
NB: to calculate the zeros of the denominator, just make the substitution $t=x^2$ and solve as usual. Then you have to figure out the order of the poles you find, maybe using the limit criterion.
Please note that this is only a sketch of how to do an integral of this kind using complex analysis techniques. |
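If you want a value to check your residue computation against, here is a numerical cross-check (my addition, assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: x**2 / (x**4 - 4*x**2 + 5)
val, err = quad(f, -np.inf, np.inf)
print(val, err)   # the residue-theorem answer should reproduce this value
```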
H: definition of divisor functions
I have a question about the definition of divisor functions when I was reading primes in tuples by Goldston, Pintz, and Yıldırım:
Let $\omega(q)$ denote the number of prime factors of a squarefree integer $q$. For any real number $m$, we define $d_m(q) = m^{\omega(q)}$. This agrees with the usual definition of the divisor functions when $m$ is a positive integer.
Can anyone tell me why this definition agrees with usual definition of divisor functions?
AI: The identity $d_m(q)=m^{\omega(q)}$ holds only when $q$ is a squarefree integer. If $q=p_1\cdots p_k$ (so that $k=\omega(q)$), then there are precisely $m^k$ ordered $m$-tuples of positive integers $(d_1,\dots,d_m)$ such that $d_1\cdots d_m=q$: exactly one coordinate $d_j$ is divisible by $p_1$, exactly one is divisible by $p_2$, and so on, and these choices can be made independently. |
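A brute-force check of this count for a small squarefree $q$ (my addition): for $q = 30 = 2\cdot3\cdot5$, so that $\omega(q)=3$, we should get $d_m(30) = m^3$.

```python
from itertools import product
from math import prod

def d_m(q, m):
    """Count ordered m-tuples of positive integers whose product is q."""
    divisors = [d for d in range(1, q + 1) if q % d == 0]
    return sum(1 for t in product(divisors, repeat=m) if prod(t) == q)

q = 30   # squarefree, omega(q) = 3
for m in (2, 3, 4):
    print(m, d_m(q, m), m ** 3)   # the two counts agree: m^omega(q)
```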
H: Input size measurement according to polynomial presenation
There's a paragraph in my book (Complexity and cryptography by Talbot and Welsh, chapter 4) that I don't fully understand:
Let $\mathbb Z[x_1,\dots,x_n]$ denote the set of polynomials in n
variables with integer coefficients. [...] We have to be careful about how the polynomials are presented so that we know how to measure the input size. For example, the following polynomial
$$f(x_1,\dots,x_n)=(x_1+x_2)(x_3+x_4)\dots(x_{2n-1}+x_{2n})$$ could clearly be encoded using the alphabet $\Sigma=\{*, 1, 0, x, (, ), +, -\}$with input size $O(n\log n)$. However if we expanded the parentheses, this same polynomial would then seem to have input size $O(n2^n\log n)$.
I don't understand how those input sizes are computed. Please clarify.
AI: A much easier example might even be the case of a single variable. Consider $(1+x)^{1000}$. If I allow the input to be the string "(1+x)^(1000)", it takes just $12$ symbols to represent the polynomial. However,
$$(1+x)^{1000}=\sum_{i=0}^{1000} \binom{1000}i x^i$$
so if I want to encode the polynomial as the array of its coefficients, I would have to store an integer array of length $1001$, and we are not talking about small numbers:
$$\binom{1000}{349} =
21641441731297707077348545845272311836878450545908804528165639558 \\
12217865325857621178201271717138465393830136766111168336839797876 \\
007373244834924296399090470901253455374741637406778878451634116115\\
025377035477150500931579776387615986613131159215927828897815143706 \\
593214626066844000$$
The input sizes in your example (although there's something wrong with $f(x_1,\ldots,x_n)$ containing the variable $x_{2n}$) are computed as follows: You require $\log(n)$ space to represent an index $i\in\{1,\ldots,n\}=:[n]$, by encoding the number in base 2, for instance. This is the space required to encode a single variable from your polynomial ring. You have $O(n)$ factors in that expression, each of them a sum of two variables, which makes a total of $O(n\log(n))$.
If you expand the expression, it becomes
$$(x_1+x_2)\cdots(x_{2n-1}+x_{2n}) =\sum_{\alpha:[n]\to\{0,1\}} \prod_{i=1}^n x_{2i-\alpha(i)}$$
and you may notice that those are $2^n$ many summands, each of them a product of $n$ variables, each of which requires the aforementioned $\log(n)$ bits to represent. That's $O(2^nn\log(n))$. |
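You can watch this blow-up directly (my addition, using SymPy) by comparing the lengths of the factored and expanded string representations:

```python
import sympy as sp
from functools import reduce
from operator import mul

n = 8
xs = sp.symbols(f'x1:{2*n + 1}')   # x1, ..., x16
f = reduce(mul, (xs[2*i] + xs[2*i + 1] for i in range(n)))

print(len(str(f)))              # factored form: O(n log n) symbols
print(len(str(sp.expand(f))))   # expanded form: ~2^n terms, far longer
```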
H: Exercise 3.15 [Atiyah/Macdonald]
I have a question regarding a claim in Atiyah-Macdonald. $A$ is a commutative ring with $1$, and $F$ is the free $A$-module $A^n$. Assume that $A$ is local with residue field $k = A/\mathfrak m$, and assume we are given a surjective map $\phi: F\to F$ with kernel $N$. Then why is the following true?
Since $F$ is a flat $A$-module, the exact sequence $0\to N \to F\overset\phi\to F\to 0$ gives an exact sequence $0\to k\otimes N \to k\otimes F \overset{1\otimes \phi}\to k\otimes F \to 0$.
I can see that $F$ is a free $A$-module, and that the first sequence is exact. But how does flatness of $F$ tell me something about the second sequence?
Thanks!
AI: From the long exact sequence of Tor the second short exact sequence looks like
$$\text{Tor}_1^A(k,F) \rightarrow k\otimes N \rightarrow k\otimes F \rightarrow k\otimes F \rightarrow 0$$
But $\text{Tor}_1^A(k,F) = 0$ since $F$ is flat. Look at Chapter 2 exercise 24. |
H: Does "=" have to be interpreted as equality?
To put it briefly: In model theory, we are allowed to interpret any relation symbol in any way we like. So why do people seem to require that "$=$" is interpreted as the actual equality?
Let me elaborate a little more. In model theory, as I imperfectly understand it, one starts with an alphabet $\Sigma$ consisting of the allowable function and relation symbols; for instance for ordered fields we could take $\Sigma = \{\cdot,+,<,0,1\}$. To these, we add symbols for variables $x_1,x_2,\dots$ and logical symbols $\vee, \wedge, \forall, \exists, \dots$. Using these, we can form terms (well formed expressions that will describe elements of the set) and sentences (expressions that will either be true or false). If we assume some set $A$ of sentences to be true (axioms), then the set of all their logical consequences, say $T$, is a theory. A theory can be interpreted by first choosing a set $X$ to work on, and then assigning to the function and relation symbols actual functions ($X^k \to X$) and relations ($X^k \to \{\top,\bot\}$). This is to be done in such a way that the axioms are satisfied.
My problem is that the equality seems to be treated in a different way than other relations, and I don't quite see why. As far as I understand, it is normally required to be the "real" identity: $x = y$ means that $x$ and $y$ are the same element of $X$. Is there some reason not to treat "$=$" just as an ordinary relation (with axioms of being an equivalence relation + for each relation symbol axiom "if $x_1 = y_1,\dots,x_k=y_k$, then $R(x_1,\dots,x_k)$ iff $R(y_1,\dots,y_k)$" )? What would go wrong if we did?
(The reason I am asking is mostly that it seems to me that this would make some constructions more elegant (such as the ultraproducts) by eliminating a quotient.)
AI: It is not wrong. We can interpret '$=$' as an arbitrary binary relation symbol; if we then add to $A$ the statements that it is an equivalence relation and that it preserves all other relation and function symbols, it will be interpreted as a congruence relation $\sim$ on a model $X$, and we can calmly form the quotient model $X/\sim$, which behaves exactly the same way as $X$, and in which '$=$' is interpreted as real equality. |
H: Let G be a group of order 24 that is not isomorphic to S4. Then one of its Sylow subgroups is normal.
Let G be a group of order 24 that is not isomorphic to S4. Then
one of its Sylow subgroups is normal.
This is the proof from my textbook.
Proof
Suppose that the 3-Sylow subgroups are not normal. The number of 3-Sylow
subgroups is 1 mod 3 and divides 8. Thus, if there is more than one 3-Sylow subgroup,
there must be four of them.
Let $X$ be the set of $3$-Sylow subgroups of $G$. Then $G$ acts on $X$ by conjugation, so we get a homomorphism $f : G \to S(X) \cong S_4$. As we've seen in the discussion on $G$-sets, the kernel of $f$ is the intersection of the isotropy subgroups of the elements of $X$. Moreover, since the action is that given by conjugation, the isotropy subgroup of $H \in X$ is $N_G(H)$ (the normalizer of $H$ in $G$). Thus,
$$\ker f = \bigcap_{H \in X} N_G(H).$$
For $H \in X$, the index of $N_G(H)$ is $4$, the number of conjugates of $H$. Thus, the order of $N_G(H)$ is $6$. Suppose that $K$ is a different element of $X$. We claim that the order of
$N_G(H) \cap N_G(K)$ divides 2.
To see this, note that the order of $N_G(H) \cap N_G(K)$ cannot be divisible by $3$. This is because any $p$-group contained in the normalizer of a $p$-Sylow subgroup must be contained in the $p$-Sylow subgroup itself (Corollary 5.3.5). Since the $3$-Sylow subgroups have prime order here, they cannot intersect unless they are equal. But if the order of $N_G(H) \cap N_G(K)$ divides $6$ and is not divisible by $3$, it must divide $2$.
In consequence, we see that the order of the kernel of $f$ divides $2$. If the kernel has order $1$, then $f$ is an isomorphism, since $G$ and $S_4$ have the same number of elements. Thus, we shall assume that $\ker f$ has order $2$. In this case, the image of $f$ has order $12$. But by Problem 2 of Exercises 4.2.18, $A_4$ is the only subgroup of $S_4$ of order $12$, so we must have $\operatorname{im} f = A_4$.
By Problem 1 of Exercises 4.2.18, the $2$-Sylow subgroup, $P_2$, of $A_4$ is normal. But since $\ker f$ has order $2$, $f^{-1}P_2$ has order $8$, and must be a $2$-Sylow subgroup of $G$. As the pre-image of a normal subgroup, it must be normal, and we're done.
My Question
I'm just confused about the last part. I kind of got lost when it was explaining how/why $f^{-1}P_2$ has order $8$. I'm not really sure how that's related to the kernel of $f$.
Thank you in advance
AI: It has to do with the fact that every (non-empty) fiber of a homomorphism is a coset of the kernel. That is, if $\varphi:G\to H$ is a homomorphism, and $h\in\operatorname{im}\varphi,$ then the fiber of $h$ under $\varphi$ is the set $$\{g\in G:\varphi(g)=h\},$$ and is a coset of $\ker\varphi$ in $G$. I outline the proof of this fact (from a linear algebra standpoint) in my answer here, and not much changes in the more general case.
Since $\ker f$ has order two, then for any $\sigma\in S_4,$ we have $f^{-1}(\sigma)$ has cardinality either $2$ or $0$. Since we're assuming that $A_4=\operatorname{im}f,$ then for each $\sigma\in A_4$ (and in particular for each $\sigma\in P_2$) we have $f^{-1}(\sigma)$ has cardinality $2$. Since $P_2$ has $4$ elements by the referenced exercise, then $f^{-1}(P_2)$ is a union of $4$ pairwise disjoint sets of cardinality $2$, meaning that $f^{-1}(P_2)$ has order $8$.
Does that help? |
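Added (my addition): if you want to see the key facts about $A_4$ concretely, recent versions of SymPy can check them; I am assuming your SymPy version provides `sylow_subgroup`.

```python
from sympy.combinatorics.named_groups import AlternatingGroup

G = AlternatingGroup(4)
P = G.sylow_subgroup(2)   # the Klein four-group inside A4
print(P.order())          # 4
print(P.is_normal(G))     # True: the 2-Sylow subgroup of A4 is normal
```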
H: Expand function into a Maclaurin's series
The function is given by:
$f(x)=\dfrac{x^{2012}}{(1-x^3)^2}$
I have no idea what to do here and honestly, I don't really understand what a Maclaurin's series is. I know the definition but I don't understand the concept enough to be able to solve problems like this.
Any help would be appreciated.
AI: You are undoubtedly familiar with the geometric series expansion
$$\frac{1}{1-t}=1+t+t^2+t^3+t^4+\cdots,$$
valid for $|t|\lt 1$. Differentiate with respect to $t$. We get
$$\frac{1}{(1-t)^2}=1+2t+3t^2+4t^3+\cdots.$$
Put $t=x^3$. We get the expansion of $\dfrac{1}{(1-x^3)^2}$.
Then multiply term by term by $x^{2012}$.
Remark: In principle, we could also find the series expansion by differentiating our function $f(x)$ repeatedly, and using the fact that the Maclaurin series is
$$\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n,$$
where $f^{(n)}(x)$ is the $n$-th derivative of $f(x)$. That would in this case be quite a bit harder than the approach we took above.
Once we know the Maclaurin series for a few familiar and important functions, most of the time we use manipulations of known series instead of going back to fundamentals. |
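To see the pattern without the huge exponent, here is the same manipulation with $x^{8}$ standing in for $x^{2012}$ (my addition, using SymPy); the coefficients $1,2,3,\dots$ march along in steps of $3$ in the exponent:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.series(x**8 / (1 - x**3)**2, x, 0, 24))
# x**8 + 2*x**11 + 3*x**14 + 4*x**17 + 5*x**20 + 6*x**23 + O(x**24)
```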
H: Why should we take intersection in this order statistics problem
Let $X_1,\dots,X_n$ be a sample of i.i.d RVs, $X_j\sim F$. Denote by $X_{(1)}\le X_{(2)}\le\dots\le X_{(n)}$ the order statistics for the sample. Find the DF of $X_{(1)}$ and $X_{(n)}$.
My take:
$\mathbb{P}(X_{(1)}\le t)=1-\mathbb{P}(X_{(1)}>t)=1-(1-F(t))$. But the solution says $1-(1-F(t))^n$. I suspect they computed as $1-\mathbb{P}(\bigcap_{j=1}^n \{X_{(j)}>t\})$. Why should we take the intersection? Isn't $X_{(1)}\le X_{(2)}\le\dots\le X_{(n)}$, hence if $X_{(1)}>t$ then it follows $X_{(2)},\dots,X_{(n)}>t$?
Similarly, why should we bother taking the intersection: $\mathbb{P}(X_{(n)}\le t)=\mathbb{P}(\bigcap_{j=1}^n \{X_{(j)}\le t\})$? Since $X_{(n)}\le t$ implies $X_{(1)},\dots,X_{(n-1)}\le t$.
I was confused and couldn't find any clue. If anyone could help me explain this, it will be greatly appreciated. Thanks!
AI: The minimum of $X_1,\ldots,X_n$ is more than $t$ precisely if all of $X_1,\ldots,X_n$ are more than $t$. To say that you're at a point where all of several events happen is to say that you're at a point in the intersection of the separate sets where those various events happen.
It is true that if $X_{(1)}>t$ then the other order statistics are greater than $t$.
But it is not true that if one of the original observations are greater than $t$, then all of them are greater than $t$. It is those original unsorted observations whose cumulative distribution function you know, and it is also those observations that are independent.
A concrete example: Suppose $n=2$ and $\displaystyle X_1 = \begin{cases} 1 & \text{with probability }1/2, \\ 2 & \text{with probability }1/2. \end{cases}$
Then we have
\begin{align}
\Pr(X_1=X_2=1) & = 1/4 \\
\Pr(X_1=1\ \&\ X_2=2) & = 1/4 \\
\Pr(X_1=2\ \&\ X_2=1) & = 1/4 \\
\Pr(X_1=X_2=2) & = 1/4
\end{align}
So what are $\Pr(X_{(1)}=1)$ and $\Pr(X_{(1)}=2)$?
$\Pr(X_{(1)}\ge1) = \Pr(\text{both}\ge1)=1$.
$\Pr(X_{(1)}\ge2) = \Pr(\text{both}\ge 2)=1/4$.
Certainly it is true that if $X_{(1)}\ge\text{something}$, then $X_{(2)}\ge\text{that same thing}$. But it is not true that if $X_1\ge\text{something}$ then $X_2\ge\text{that same thing}$, so if you want the probability that they're both $\ge$ something, you need to consider both. |
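A short simulation (my addition) makes the resulting formula $\Pr(X_{(1)}\le t)=1-(1-F(t))^n$ vivid for uniform samples, where $F(t)=t$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, t = 5, 200_000, 0.3
samples = rng.uniform(size=(trials, n))
print(np.mean(samples.min(axis=1) <= t))   # empirical: ~0.8319
print(1 - (1 - t) ** n)                    # theoretical: 0.83193
```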
H: Is there a name for this type of relation?
Let $S$ be a set. Let $\sim$ be a binary relation on $S$. Suppose $\sim$ follows these three rules.
$x\sim x$ for all $x\in S$ (reflexivity).
If $x\sim y$, then $y\sim x$ for all $x, y \in S$ (symmetry).
If $x_1\sim x_2 $, $x_2\sim x_3 $, $x_3\sim \cdots$, then there exists an $n>2$ in the indices of this chain of similarities such that $x_1\nsim x_n$.
I'm trying to capture a kind of "partial" or "fuzzy" transitivity. I'm not sure if these are quite the right rules and I'd be happy for anyone to refine them. It reminds me of that children's game, "Whisper Down the Lane", where children in a row whisper a message into each other's ears. Eventually, the final message does not resemble the initial one at all, nor probably a few others.
Update: See comments for modifications on this definition.
AI: These rules seem to be inconsistent. For example, $x\sim x\sim x\sim x\cdots$, so the third rule cannot hold in this case.
A relation that seems to meet what you want, on $\mathbb{R}$, is $D_h(x,y)$ where $xD_hy$ if and only if $|x-y|\le h$. It doesn't meet the third condition, but it models the children's game. |
H: Proving divisibility in elementary number theory problem
Find all positive integers n such that $(n+1)\mid(n^2+1)$.
What I have done so far.
I noticed that $ n^2 + 1 = (n + 1 - 1)^2 + 1 = (n + 1)^2 -2(n + 1) + 2$.
Hence, for the relation to be true, we must have that $(n+1)\mid 2$, that is $n=1$.
How would I prove it?
AI: Your argument is wonderful.
Why must $n = 1$? The only positive integers that divide $2$ are $1$ and $2$, so the only positive integer $n$ with $(n + 1) \mid 2$ is $n = 1$, which gives $n + 1 = 2$. $\;(n = 0$ would give us $n + 1 = 1 \mid 2$, but $n = 0 \ngeq 1$.)
Done!
Of course, you are using the fact that $\,a\mid (am + an + x) \iff a\mid a(m + n) + x \implies a \mid x,\;$ but that is fairly easy to discern, given your argument. |
H: Linear combination (vectors in space)
First of all, sorry if my question is too easy for you guys, and sorry for my poor English. I have serious trouble with vectors, haha. Can someone please help me?
Given the vectors $$\vec{u} = 4\vec{i}+\vec{j}-3\vec{k}$$ $$\vec{v} = 3\vec{j}+\vec{k}$$ $$\vec{w} = 2\vec{j}+3\vec{k}$$
Justify why $\vec{v}$ can't be expressed as a linear combination of $\vec{u}$ and $\vec{w}$
Thanks!
AI: Taking $$\left|\begin{matrix} 4&1&-3\\0&3&1\\0&2&3\end{matrix}\right|=28\neq 0$$ we find that this matrix has full rank, i.e. rank $3$. Hence the rows are linearly independent; in particular, $\vec v$ cannot be expressed as a linear combination of $\vec u$ and $\vec w$. |
H: Is partition function increasing function?
I have some exercises which require knowing the number of partitions of particular numbers, so I used some Python code which I found on the internet to compute the values of the partition function for the values I want.
But I noticed that $p(10)=p(12)=57$ and $p(11)=51$, where $p$ is the partition function; this is what the program gave me!
Before that, I guessed that the partition function is increasing, but the calculations by the code showed that it's not.
So, is my guess right and the code has an error, or is my guess wrong?
AI: It's definitely increasing; given any partition of $n$,
$$n=a_1+a_2+a_3+\cdots+a_k,$$
we have a corresponding partition of $n+1$, namely
$$n+1=a_1+a_2+a_3+\cdots+a_k+1$$
so that there are at least as many partitions of $n+1$ are there are of $n$.
According to OEIS A000041, the first few values of the partition function are
$$\begin{array}{r|r}
n & a(n)\\\hline
\tt 0 &\tt 1\\
\tt 1 &\tt 1\\
\tt 2 &\tt 2\\
\tt 3 &\tt 3\\
\tt 4 &\tt 5\\
\tt 5 &\tt 7\\
\tt 6 &\tt 11\\
\tt 7 &\tt 15\\
\tt 8 &\tt 22\\
\tt 9 &\tt 30\\
\tt 10 &\tt 42\\
\tt 11 &\tt 56\\
\tt 12 &\tt 77
\end{array}$$
so your program has a bug. |
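For reference, here is a short dynamic-programming implementation (my addition) that reproduces the OEIS values, so you can compare it against the script you found:

```python
def partitions(nmax):
    """Return a list p with p[n] = number of partitions of n, for 0 <= n <= nmax."""
    p = [1] + [0] * nmax
    for part in range(1, nmax + 1):        # allow parts of this size
        for n in range(part, nmax + 1):
            p[n] += p[n - part]
    return p

p = partitions(12)
print(p[10], p[11], p[12])   # 42 56 77
```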
H: Some problems about the proof of a theorem
There's a theorem in my book (Complexity and cryptography by Talbot and Welsh, chapter 4) where I don't understand some parts of its proof:
THEOREM: Suppose $f \in \mathbb Z[x_1,..., x_n]$ has degree at most $k$ and is
not identically zero. If $a_1,...,a_n$ are chosen independently and
uniformly at random from $\{1,...,N\}$then $$Pr[f(a_1,...,a_n) = 0]
\leq \frac kN$$ Proof: We use induction on n.$\color{blue}{\text{For }
n = 1}$ the result holds since a polynomial of degree at most $k$ in a
single variable has at most $k$ roots. So let $n > 1$ and write $$f =
f_0+ f_1x_1+ f_2x_1^2+\dots+ f_tx_1^t$$ where $f_0,\dots, f_t$ are
polynomials in $x_2, x_3,... x_n$; $f_t$ is not identically zero and
$t \geq 0$. If $t = 0$ then $f$ is a polynomial in $n −1$ variables so
the result holds. So we may suppose that $1 \leq t \leq k$ and $f_t$
is of degree at most $k −t$. We let $E_1$ denote the event
‘$f(a_1,\dots,a_n) = 0$’ and $E_2$ denote the event
‘$f_t(a_2,\dots,a_n) = 0$’. Now $$Pr[E_1] = Pr[E_1 | E_2]
Pr[E_2]+Pr[E_1 | not E_2] Pr[not E_2] \leq Pr[E_2]+Pr[E_1 | not
E_2]\tag{1}$$ Our inductive hypothesis implies that $$Pr[E_2] =
Pr[f_t(a_2,...,a_n) = 0] \leq\frac{k −t}N$$ since $f_t$ has degree at
most $k −t$. Also $$Pr[E_1 | not E_2] \leq \frac tN$$ This is true
because $a_1$ is chosen independently of $a_2,\dots,a_n$, so if
$a_2,\dots,a_n$ are fixed and we know that $f_t(a_2,\dots,a_n) \neq 0$, then $f$
is a polynomial in $x_1$ that is not identically zero. Hence $f$, as a
polynomial in $x_1$, has degree $t$ and so has at most $t$ roots. Putting
this together we obtain $$Pr[f(a_1,...,a_n) = 0] \leq \frac{k −t}N
+\frac tN \leq \frac kN$$ as required.
I don't understand how it holds for $n=1$ and $t=0$.
Where did the formula $(1)$ come from? I don't get it.
AI: For your first question, notice that the given proof doesn't actually have a case "$n=1$ and $t=0$", because the case $n=1$ is treated separately, and $t$ is introduced only in the argument for $n>1$. If you nevertheless want to consider $n=1$ and $t=0$, then you're dealing with a constant polynomial (because $t=0$) of one variable (because $n=1$). Furthermore, the constant isn't $0$, because of the hypothesis in the theorem that $f$ isn't identically zero. So $f$ is a non-zero constant, and the probability that $f(a_1,\dots,a_n)=0$ is zero, as the theorem claims.
For your second question, equation (1) comes from combining three facts. First, the event $E_1$ is the disjoint union of the two events $E_1\cap E_2$ and $E_1\cap\sim E_2$ (where I'm using $\sim$ for "not"). Second, by definition of conditional probability,
$$
Pr[E_1\cap E_2]= Pr[E_1\mid E_2]\cdot Pr[E_2],
$$
and similarly with $\sim E_2$ in place of $E_2$. That gives the equality at the beginning of (1). Third, all probabilities and all conditional probabilities are between $0$ and $1$, inclusive. Applying that to $Pr[E_1\mid E_2]$ and to $Pr[\sim E_2]$, you get the inequality at the end of (1).
EDIT, in answer to a comment: The case $n=1$ is covered by the fact from elementary algebra that a polynomial, in one variable, of degree $k$ cannot have more than $k$ roots. So of the $N$ possible choices for $a_1$, at most $k$ make $f(a_1)=0$. So, if $a_1$ is chosen at random from the $N$ possibilities, the probability of getting $f(a_1)=0$ is at most $k/N$.
For $n>1$ and $t=0$, the polynomial $f$, though apparently a polynomial in $n$ variables $x_1$ to $x_n$, doesn't actually involve $x_1$ at all; that's what $t=0$ means. So it's really a polynomial in $n-1$ (or fewer) variables. Since the proof is by induction on $n$, the desired conclusion for this $f$ will be given by the induction hypothesis. |
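This is the Schwartz-Zippel-style bound, and it is easy to test empirically (my addition; the polynomial below is just an illustrative choice): $f(x_1,x_2)=(x_1-1)(x_2-2)$ has degree $k=2$, and the observed probability of hitting a zero stays below $k/N$.

```python
import random

N, k = 100, 2
f = lambda a1, a2: (a1 - 1) * (a2 - 2)   # a degree-2 polynomial

trials = 100_000
zeros = sum(f(random.randint(1, N), random.randint(1, N)) == 0
            for _ in range(trials))
print(zeros / trials, k / N)   # ~0.0199 <= 0.02
```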
H: De Morgan's laws in natural deduction?
We are asked to use natural deduction to prove some stuff. Problem is, without De Morgan's law, which I think belongs in transformational proof, lots of things seem difficult to prove. Would using de Morgan's laws be a violation of "In your natural deduction proofs, use only natural
deduction inference rules; i.e., do
not use any transformational laws."? If so, how can I work around de Morgan?
AI: The usual natural deduction introduction and elimination rules for $\land$ and $\lor$, together with the classical rules for negation, allow you to derive De Morgan's laws, i.e., to show that from $\neg(\varphi \land \psi)$ you can derive $\neg\varphi \lor \neg\psi$, and vice versa, and the duals. Each of the four proofs is easy and no more than about a dozen lines [Fitch style] or the equivalent [Gentzen style]. They are routine examples, or exercises for beginners.
So it is never really harder to prove something from natural deduction first principles alone rather than from the natural deduction rules augmented with De Morgan's laws as derived rules, it is just a bit longer. Whenever you want to invoke one of De Morgan's laws, just slot in the standard proof routine using the basic natural deduction rules to derive the required instance. What's the problem? |
H: The group $\mathbb{Z}^\mathbb{N}/\mathbb{Z}^{(\mathbb{N})}$ can't be embedded in a product $\mathbb{Z}^A$ for any $A$
As the title says, I need to prove that:
There isn't a group monomorphism $\psi: \mathbb{Z}^\mathbb{N}/\mathbb{Z}^{(\mathbb{N})} \to \mathbb{Z}^A$ for any $A$
and, of course, this is equivalent to prove that there isn't any $\psi: \mathbb{Z}^\mathbb{N} \to \mathbb{Z}^A$ such that $\ker (\psi) = \mathbb{Z}^{(\mathbb{N})}$.
For this purpose I have tried to put the discrete topology $\tau_D$ on $\mathbb{Z}$ and the product topology $\tau$ on $\mathbb{Z}^\mathbb{N}$, which turn out to be Hausdorff, and $\mathbb{Z}^{(\mathbb{N})}$ is dense. So I just need to put a Hausdorff topology on $\mathbb{Z}^A$ for which all linear maps $\psi$ such that $\mathbb{Z}^{(\mathbb{N})} \subset \ker (\psi)$ are continuous, to conclude that $\psi$ must be constant.
I have tried with the product topology as above on $\mathbb{Z}^A$, but I'm stuck proving that linear maps are continuous.
Please don't spoil my question with a different proof if it's possible, because this is my homework. Thank you very much.
I come with a new approach, I'm trying to prove that the topology
$$\{B \subset \mathbb{Z}^A: \psi^{-1}(B) \text{ is open for } \psi: \mathbb{Z}^\mathbb{N} \to \mathbb{Z}^A \text{ linear such that } \mathbb{Z}^{(\mathbb{N})} \subset \operatorname{ker}(\psi)\}$$
is Hausdorff, can you help me? Sorry if I'm being too annoying with this.
AI: Remember that a function into a topological product space is continuous if and only if each of its components (i.e., its compositions with the projection maps of the product to the factors) is continuous. So to prove that all homomorphisms $\mathbb Z^{\mathbb N}\to\mathbb Z^A$ are continuous, it would suffice to prove this for homomorphisms $\mathbb Z^{\mathbb N}\to\mathbb Z$. The good news is that this continuity result is true; the bad news is that it's a nontrivial theorem of Specker. Specifically, for every homomorphism $h:\mathbb Z^{\mathbb N}\to\mathbb Z$, there is a finite $n$ such that $h(x_1,x_2,\dots)$ depends only on the first $n$ components $x_1,\dots,x_n$ of the input $(x_1,x_2,\dots)\in\mathbb Z^{\mathbb N}$. Proofs of this can be found in textbooks on abelian groups, for example Fuchs's "Infinite Abelian Groups" or Eklof and Mekler's "Almost Free Modules", but, as I said, it's not trivial and probably not what was intended by the person assigning this homework.
If you're willing to deviate from the topological approach, I suggest showing that $\mathbb Z^{\mathbb N}/\mathbb Z^{(\mathbb N)}$ has a non-trivial divisible subgroup and that such a subgroup cannot have a monomorphism into $\mathbb Z^A$. |
H: Proving a subset
I need to prove that $A \subset B$ if and only if $A \cap B = A$.
This seems straightforward to understand and then explain in words, as the two statements are equivalent, but I don't understand how I would correctly explain this in a formal sense.
AI: Hint: to show that $X=Y$ you have to show both inclusions: $X\subset Y$ and $Y\subset X$. |
H: What is the greatest integer function, and how do you integrate it?
$[x]$ denotes the greatest integer $\leq x$. Let $f(x)=[x]$ and let $g(x)=[2x].$
I am having a hard time understanding this. What is meant by "greatest integer?" Can anyone refer me to any visual/graphical explanation for $[x]?$
For example, how would we draw the graph for $f(x)=[\sqrt{x}]$?
Also I am a little confused by properties such as $[2x]=[x]+[x+1/2]$, and similarly with $[3x],[4x],\ldots$
Lastly, how would you integrate it?
For example, how would one evaluate $\int_1^3{[x]\,dx}$ and $\int_0^9[\sqrt t]\,dt$?
In general, I am completely confused by $[x]$ and its meaning and its functional ability. So any intuitive explanation would be much appreciated.
AI: Given a real number $x$, the notation $[x]$ (also often seen as $\lfloor x\rfloor$) literally means "the greatest integer less than or equal to $x$". For example, $1.2$ is not an integer. The integers are
$$\ldots,\;-4,\;-3,\;-2,\;-1,\;0,\;1,\;2,\;3,\;4,\ldots$$
We throw away all the integers that are not less than or equal to $1.2$:
$$\ldots,\;-4,\;-3,\;-2,\;-1,\;0,\;1\hphantom{,\;2,\;3,\;4,\ldots}$$
and take the largest one of them, which is $1$. Therefore, the greatest integer less than $1.2$ is $1$, and we write
$$\lfloor 1.2\rfloor =1.$$
(By the way, take a look at the relevant Wikipedia article.)
Here is a graph of $\lfloor x\rfloor$, from $x=-3$ to $x=3$:
Here is a version where I've added a graph of just the function $x$ itself (in red) and lines indicating the integers (in green).
You can see on this graph that, as we expected, $\lfloor 1.2\rfloor =1$.
Here is a graph of $\lfloor \sqrt{x}\rfloor$, from $x=0$ to $x=25$:
Here is a version where I've added a graph of just the function $\sqrt{x}$ itself (in red) and lines indicating the integers (in green).
Here's a check for this graph: using a calculator, we know that
$$\sqrt{20}\approx 4.47214$$
and so the largest integer that's less than or equal to $\sqrt{20}$ will be $4$.
Using the fact that, for any numbers $a<b<c$,
$$\int_a^c f(x)\,dx=\int_a^bf(x)\,dx+\int_b^cf(x)\,dx,$$
we can break up the range we're integrating over into pieces where $\lfloor\;\;\rfloor$ is constant, and we do know how to integrate constants (note that the endpoints don't contribute the value of the integral; integrating over the set of $x$'s for which $a\leq x\leq b$ will give the same answer as integrating over the set of $x$'s for which $a\leq x<b$, and we denote both of these operations by $\int_a^b$). For example,
$$\int_1^3\lfloor x\rfloor\,dx=\int_1^2\lfloor x\rfloor\,dx+\int_2^3\lfloor x\rfloor\,dx=\int_1^21\,dx+\int_2^32\,dx=1+2=3.$$ |
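Numerically (my addition), you can confirm both integrals from the question by summing over the pieces where the integrand is constant:

```python
import math

def integrate_step(g, a, b, breaks):
    """Integrate a function g that is constant between consecutive breakpoints."""
    pts = [a] + [p for p in breaks if a < p < b] + [b]
    return sum(g((lo + hi) / 2) * (hi - lo) for lo, hi in zip(pts, pts[1:]))

print(integrate_step(math.floor, 1, 3, [2]))                              # 3.0
print(integrate_step(lambda t: math.floor(math.sqrt(t)), 0, 9, [1, 4]))   # 13.0
```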
H: What is the solution for $a^x + bx = c$?
What is the solution for $a^x + bx = c$? Also, can anyone refer me to a good article/book/etc that covers general solution methods for exponential functions?
Thanks,
AI: This can be solved in terms of the Lambert W function, which is defined as the solution for $w$ to $z = w e^w$. I think you'd read about the Lambert W function in a book or chapter with a title like "special functions" that just listed a lot of these various functions that can be used in weird circumstances like this, but I've never actually read one myself because I've never had any need to.
http://en.wikipedia.org/wiki/Lambert_W_function |
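Concretely, one can check by substituting back that $x = \frac{c}{b} - \frac{1}{\ln a}\,W\!\left(\frac{\ln a}{b}\,a^{c/b}\right)$ solves $a^x + bx = c$ (this closed form is my addition, not quoted from a reference); SciPy's `lambertw` lets you verify it numerically:

```python
import numpy as np
from scipy.special import lambertw

a, b, c = 2.0, 3.0, 10.0
x = c / b - lambertw((np.log(a) / b) * a ** (c / b)).real / np.log(a)
print(x, a**x + b * x)   # x ~ 2.0, and a^x + b*x ~ 10.0 = c
```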
H: Transformation to spherical coordinate system
If I have a sphere $T: x^{2}+y^{2}+z^{2}\leqslant 10z$ and transform to the spherical coordinate system via:
$
x=r\cos\theta\sin\varphi\\
y=r\sin\theta\sin\varphi\\
z=r\cos\varphi
$
What values of $\varphi$ will I get:
$0\leqslant\varphi\leqslant\pi$ or $0\leqslant\varphi\leqslant\pi/2$, and why?
Thanks!
AI: $x^2+y^2+z^2 \leq 10z$ is the same as $x^2+y^2+(z-5)^2\leq 25$ so we are dealing with a solid ball of radius 5 centered at $(0,0,5)$.
Naively we can translate this inequality to $\rho^2 \leq 10\rho \cos(\phi)$ so that $\rho \leq 10\cos(\phi)$.
I have drawn a ray emanating from the origin out to the sphere [whose equation is $\rho=10\cos(\phi)$]. The angle $\phi$ should sweep from the $z$-axis down to $\phi=\pi/2$. Notice that at this point $\rho=10\cos(\pi/2)=0$ (so we should stop).
Notice that if you had allowed $\phi$ to continue to go past $\pi/2$, cosine and thus $\rho$ would become negative (which indicates something isn't quite right).
Therefore, the bounds in spherical coordinates are:
$0 \leq \rho \leq 10\cos(\phi)$, $0 \leq \phi \leq \pi/2$, and $0 \leq \theta \leq 2\pi$. |
H: $X$ is an odd number, $Y$ is a natural number more than 36. If $\frac{1}{X}+\frac{2}{Y}=\frac{1}{18}$, find the set $(X,Y)$?
$X$ is an odd number, $Y$ is a natural number greater than $36$. If $\frac{1}{X}+\frac{2}{Y}=\frac{1}{18}$, find all pairs $(X,Y)$.
Rearranging the given equation, we have
$\frac{2}{Y}=\frac{X-18}{18X}$
$\frac{1}{Y}=\frac{X-18}{36X}$
$Y=\frac{36X}{X-18}$
$Y=\frac{2\cdot2\cdot3\cdot3\cdot X}{X-18}$
Now we know that $X$ is an odd number, therefore $X-18$ must also be an odd number.
By visual inspection we see that $X-18$ can take the values $1$, $3$ and $9$,
i.e. $X-18 = 1 = 3^{0}$
and $X=19$
$X-18 = 3 = 3^{1}$
i.e, $X=21$
$X-18 = 9 = 3^{2}$
i.e, $X=27$
when I see the pattern for $X-18$ to be $3^{0}, 3^{1}, 3^{2}$ I'm tempted to try for $3^{3}, 3^{4},...$
when $X-18=3^3$ we have $X=45$
when $X-18=3^4$ we have $X=99$
I get the answer, but what's the reasoning behind this?
Is there another way to solve this question?
AI: As you have $Y=\frac{36X}{X-18},$
So, $Y=\frac{36(X-18)+36\cdot 18}{X-18}=36+\frac{3^4\cdot2^3}{X-18}$
also as, $Y>0, X>18$
and as you have identified $X-18=3^r ,0\le r\le 4 $
Alternatively
HINT:
So, $Y=\frac{36X}{X-18}$ which is even $=2Z$(say)
So, we have $$\frac1 X+\frac1 Z=\frac1{18}\implies XZ-18(X+Z)=0$$
$$\implies (X-18)(Z-18)=18^2\implies Z-18=\frac{18^2}{X-18}\text{ which is an integer }$$
Clearly, $(Z-18)(X-18)\ne0\implies X,Z>18$ |
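Added (my addition): a brute-force search confirms that $X-18=3^r$, $0\le r\le 4$, gives exactly five pairs:

```python
from fractions import Fraction

solutions = []
for X in range(19, 1000, 2):                 # X odd; X > 18 is forced
    rem = Fraction(1, 18) - Fraction(1, X)   # this must equal 2/Y
    Y = Fraction(2) / rem
    if Y.denominator == 1 and Y > 36:
        solutions.append((X, int(Y)))
print(solutions)   # [(19, 684), (21, 252), (27, 108), (45, 60), (99, 44)]
```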
H: Find all positive integers $x$ such that $13 \mid (x^2 + 1)$
I was able to solve this by hand to get $x = 5$ and $x =8$. I didn't know if there were more solutions, so I just verified it by WolframAlpha. I set up the congruence relation $x^2 \equiv -1 \pmod{13}$ and literally just multiplied out. This led me to two questions:
But I was wondering how would I do this if the $x$'s were really large? It doesn't seem like multiplying out by hand could be the only possible method.
Further, what if there were 15 or 100 of these $x$'s? How do I know when to stop?
AI: If $p$ is an odd prime, and $a$ is not divisible by $p$, then the congruence $x^2\equiv a\pmod{p}$ has $0$ or $2$ solutions modulo $p$. You have found two incongruent solutions. So you have all of them: all solutions are of the form $x=5+13k$ or $x=8+13k$, where $k$ ranges over the integers.
Actually, finding one solution would be enough, for if $x$ is a solution, automatically so is $-x$.
For prime $p$, there are good algorithms for computing solutions of $x^2\equiv a \pmod{p}$, that are feasible even for enormous $p$.
If the modulus is not prime, things get more complicated. Suppose that $m$ is an odd number $\gt 1$. Let the number of distinct prime divisors of $m$ be $e$. Then the congruence $x^2\equiv a\pmod{m}$, where $a$ and $m$ are relatively prime, either has $0$ solutions or $2^e$ solutions.
Finding the solutions can be computationally difficult. If $m$ is the product of two distinct primes, then finding the solutions is essentially equivalent to factoring $m$. This is believed to be in general computationally very difficult for enormous $m$. |
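One such algorithm for the prime case (Tonelli-Shanks) is implemented in SymPy (my addition); it instantly recovers the two roots mod $13$ and works just as well for much larger primes:

```python
from sympy.ntheory import sqrt_mod

print(sqrt_mod(-1, 13, all_roots=True))      # [5, 8]
print(sqrt_mod(-1, 65537, all_roots=True))   # the two roots mod the prime 65537
```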
H: A simple adjoint operator question
I'm trying to solve this problem:
Let $\Omega$ be a bounded open subset of $\mathbb{R}$, and consider the real Hilbert spaces $X = L^2(\Omega)$ and $Y = \mathbb{R}^{2\times 2}$, with the inner products:
$$\langle u,v\rangle_X\ =\ \int_{\Omega}uv\quad\forall\ u,v\in X, \quad\quad \langle A,B\rangle_Y\ =\ \mathrm{tr}(A^TB)\quad \forall\ A,B\in Y,$$
and define the operator $\mathcal{A}:X\rightarrow Y$ by
$$\mathcal{A}(u)\ :=\ \left(\begin{array}{cc}
\int_{\Omega}u & \int_{\Omega}xu\\
\int_{\Omega}xu & \int_{\Omega}x^2u
\end{array}\right),\quad \forall\ u\in X.$$
Show that $\mathcal{A}\in\mathcal{L}(X,Y)$ and compute explicitly the operator $\mathcal{A}^*$ and the spaces $\mathrm{Ker}(\mathcal{A})$, $\mathrm{Im}(\mathcal{A})$, $\mathrm{Ker}(\mathcal{A}^*)$ and $\mathrm{Im}(\mathcal{A}^*)$.
I have already proved that $\mathcal{A}\in\mathcal{L}(X,Y)$ when I consider functionals $F_j(u) = \langle u, x^j\rangle$, $j=0,1,2$ in $X^{\prime}$ and then define
$$\mathcal{A}(u)\ :=\ \left(\begin{array}{cc}
F_0(u) & F_1(u)\\
F_1(u) & F_2(u)
\end{array}\right),\quad \forall\ u\in X.$$
So, I have that
$$\mathcal{A}^*(P)\ =\ P_{11} + (P_{12} + P_{21})x + P_{22}x^2,\quad \forall\ P\in Y.$$
Now, I need help in the last part, I have proved that
$$\mathrm{Ker}(\mathcal{A})\ =\ ^{\circ}\{F_0,F_1,F_2\}\ =\ \{1,x,x^2\}^{\perp}\quad \text{ and }\quad \mathrm{Ker}(\mathcal{A}^*)\ =\ \{P\in Y : P^T = -P\},$$
but I don't know how compute $\mathrm{Im}(\mathcal{A})$ and $\mathrm{Im}(\mathcal{A}^*)$.
Please somebody help me.
Thanks in advance
AI: Since the domain of $\newcommand{\A}{\mathcal{A}}\A$ is all of $X$, the closed range theorem says:
$$
\mathrm{Im}(\A) = \mathrm{Ker}(\A^*)^{\perp} = \{A\in Y: \mathrm{tr}(A^TB)=0,\;\forall B\in \mathrm{Ker}(\A^*)\},
$$
which are all symmetric $2\times 2$ matrices and
$$
\mathrm{Im}(\A^*) = \mathrm{Ker}(\A)^{\perp} = \{u\in X'\simeq X:\int_{\Omega}uv = 0,\;\forall v\in \mathrm{Ker}(\A)\}
$$
which is $\mathrm{span}\{1,x,x^2\}$. |
H: expected number of repeats in random strings from different sized alphabets
The question is just for fun, and I feel like I'm missing a clever way of thinking about it.
Suppose that you are on an alien planet, and you are trying to learn the aliens' language. You break into one of their houses, get on a computer, and print the contents of a file by accident. You have to figure out whether this document is written in their language, or whether it's just a binary file.
The idea here, I think, is that the alphabet for the language is smaller than the alphabet for a printed binary file. You would expect many more repeats from the string with the smaller alphabet. After some string length, you would be able to give a pretty good estimate of the number of letters in the alphabet from which the text was written.
Anyway, the question is: what is the probability that a string of length $n$, chosen randomly from an alphabet of size $m$, will have exactly $k$ unique letters?
AI: The number of words of length $n$ that use exactly $k$ letters from an alphabet of size $m$ is
$$ {m \choose k} k! \, S_{n,k} = \frac{m!}{(m-k)!}S_{n,k}$$
where $S_{n,k}$ is the Stirling number of the second kind (you can think of the alphabet as a set of $m$ urns, with $n$ numbered balls, each one corresponding to a position in the word). The total number of words is $m^n$. Hence the probabilities are given by
$$ p_k^{n,m} = \frac{m!}{(m-k)!} \frac{S_{n,k}}{m^n}$$ |
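Here is that formula in code (my addition), cross-checked against a brute-force enumeration for small $n$ and $m$:

```python
from itertools import product
from math import factorial
from sympy.functions.combinatorial.numbers import stirling

def p_formula(n, m, k):
    return float(factorial(m) // factorial(m - k) * stirling(n, k) / m**n)

def p_bruteforce(n, m, k):
    return sum(1 for w in product(range(m), repeat=n)
               if len(set(w)) == k) / m**n

print(p_formula(5, 3, 2), p_bruteforce(5, 3, 2))   # both 0.370370...
```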
H: \exists quantifier and being explicit about quantity
I have a question regarding the use of the $ \exists $ quantifier when trying to signify explicit quantities.
Consider
$F:$ the set of all fruits
$A(x):$ x is an apple
$R(x):$ x is rotten
If I wanted to express 'One apple is rotten' would this be logically sound?
$\exists x \in F, \forall y \in F, x \neq y \wedge R(x) \wedge \neg R(y)$
Also how would the above be different from:
$\exists x \in F, \forall y \in F, x \neq y \Rightarrow R(x) \wedge \neg R(y)$
Thanks!
AI: As Asaf pointed out, there are a couple of flaws with how you've formulated your expressions. The following should help:
"At least one apple is rotten." translates to "There is a fruit (call it $x$) that is both an apple and rotten.":
$$
\exists x \in F~~~[A(x) \land R(x)]
$$
"Exactly one apple is rotten." translates to "There is a particular type of fruit (call it $x$) such that for any other fruit (call it $y$), the other fruit can only be an apple and be rotten if and only if the other fruit was actually the particular type of fruit.":
$$
\exists x \in F~~~\forall y \in F,~~[A(y) \land R(y) \iff x=y]
$$ |
H: function lifting on $S^1 \times S^1$
Let $f:S^1 \times S^1 \to S^1 \times S^1$ a continuous function and $p:\mathbb{R}^2 \to S^1 \times S^1: (t,s) \mapsto (e^{2\pi i t},e^{2\pi i s})$ a covering map. if $F: \mathbb R ^2 \to \mathbb R ^2 $ is a lifting of $f \circ p$
prove that there exists $(d_1,d_2),(e_1,e_2) \in \mathbb{Z}^2$ such that for all $n,m \in \mathbb Z$
$$ \forall (t,s) \in \mathbb R^2 \quad F(t+m,s+n) = F(t,s) + n(d_1,d_2) + m(e_1,e_2) $$
AI: What is the question here? Given $ F $, notice that $ F'(t,s) = F(t + 1, s) $ is also a lift. So then $ p \circ F = p \circ F' $, and since the kernel of $ p $ is discrete and $ F, F' $ are continuous, $ F - F'$ is a constant element of the kernel. Then you can show the same for the second argument. |
H: Limits and continuous functions
I have always been told that if $f(x)$ is a continuous function at $a$ so that $f(a) = L$, then $\lim_{x\to a}f(x) = L$. Please, could someone explain in detail why this is true?
AI: It is true because of the way we define "continuous" and because of the meaning of $\lim_{x\to a}f(x)=L.$ Which parts of those definitions are you having trouble with?
Ultimately, most courses will define what they mean by $$\lim_{x\to a} f(x)=L,$$ then define "$f(x)$ is continuous at $a$" by in effect saying $\lim_{x\to a}f(x)=f(a)$ (in whatever form). There's not much more to it than that. |
H: If $\int^{\pi}_0 x f (\sin x) dx = k \int^{\pi/2}_0 f (\sin x) dx$, find $k$.
Problem : If $\int^{\pi}_0 x f (\sin x) dx = k \int^{\pi/2}_0 f (\sin x) dx$, find $k$.
Solution: The period of the sine function is $2 \pi$.
I don't know whether we can use the period of this function to solve this problem.
AI: Let $ I := \int\limits_0^\pi x f(\sin x)\ dx $. By the substitution $ u \mapsto \pi - x $, we arrive at $$I = \int\limits_0^\pi \left(\pi - u\right) f\left(\sin\left(\pi - u\right)\right) \ du = \int\limits_0^\pi \left(\pi - u\right)f(\sin u)\ du $$Hence, $$ 2I = \int_0^\pi \left(u + \left(\pi - u\right)\right)f(\sin u) \ du = 2\pi \int_0^\frac{\pi}{2} f(\sin u) \ du $$ Hence, the answer is that $ k = \pi $. |
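A SymPy spot-check with a few concrete choices of $f$ (my addition) confirms $k=\pi$:

```python
import sympy as sp

x = sp.symbols('x')
for f in (lambda s: s, lambda s: s**2, lambda s: s**3):
    lhs = sp.integrate(x * f(sp.sin(x)), (x, 0, sp.pi))
    rhs = sp.pi * sp.integrate(f(sp.sin(x)), (x, 0, sp.pi / 2))
    print(sp.simplify(lhs - rhs))   # 0 each time
```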
H: When integrating $A/x$, why use the logarithm instead of $x$ raised to a power?
$\int\frac{15}{x}dx$ would be $$15\int\frac{1}{x}dx = 15\ln|x|+c$$
This seems like a silly question but I'm feeling exceptionally dense today. Why would you apply the logarithm rule? Why wouldn't raising $x$ to a $-1$ exponent work?
AI: Let $a$ be a fixed positive number. We know by integration that
$$\int_1^a \frac{dx}{x}=\ln a-\ln 1=\ln a.\tag{1}$$
We try to reconcile this with the ordinary formula for the integral of a power. Note that if $w\ne 1$, then
$$\int_1^a x^{-w}\,dx= \frac{1}{1-w}\left(a^{1-w}-1^{1-w}\right).\tag{2}$$
There is an obvious problem in using this when $w=1$, because of the division by $0$. So instead let us take the limit as $w$ approaches $1$ of the expression at the right side of (2).
So, letting $h=1-w$, we want to find
$$\lim_{h\to 0}\frac{a^h -1}{h}.\tag{3}$$
We recognize the limit (3) as the derivative of the function $a^t$ with respect to $t$, at $t=0$. Since $a^t=e^{(\ln a)t}$, the derivative at $t=0$ is $\ln a$.
This is the same as the answer (1) that we obtained by using the standard formula for $\int \frac{dx}{x}$. So this formula can be thought of as a limiting case of the "usual" formula (2). |
H: Is there a simple group of any (infinite) size?
I'm trying to show that for any infinite cardinal $\kappa$ there is a simple group $G$ of size $\kappa$, I tried to use the compactness theorem and then ascending Löwenheim-Skolem, but this is impossible as the following argument shows:
Suppose $F$ is a set of sentences in a first-order language containing the language of groups such that for any group $G$ we have $G\vDash F$ iff $G$ is a simple group, let $\varphi$ be a sentence such that $G\vDash \varphi $ iff $G$ is abelian. Then as for each prime $p$, $\mathbb Z_p$ is simple and abelian, by the compactness theorem there is an infinite abelian simple group $G$, and thus using Löwenheim-Skolem we obtain an uncountable simple abelian group $G$. Pick a non-zero element $a\in G$, then $\langle a \rangle$ is a proper normal subgroup of $G$. Contradiction.
Note that the contradiction is not obtained from the existence of an infinite abelian simple group (something which I don't know to be true or not), but from the assumption that the simple groups can be axiomatized, together with the use of the Löwenheim-Skolem theorem.
Now, is there a way to prove this assertion?
Thanks.
AI: For any (finite or infinite) cardinal $\kappa$, if $\kappa \ge 5$ then the finitary alternating group $A(\kappa)$ is simple. This is the group consisting of permutations of $\kappa$ that have finite support and are even. (If $\kappa$ is finite then this is just the ordinary alternating group $A_\kappa$.) If $\kappa$ is infinite, then $A(\kappa)$ has cardinality $\kappa$. |
H: Find the smallest positive integer $n$ such that $\sqrt{n}-\sqrt{n-1}\leq \frac{1}{100}$
Find the smallest positive integer $n$ such that $\sqrt n-\sqrt{n-1}\le \frac{1}{100}$.
First I multiplied by the conjugate and got
$$\frac 1{\sqrt n + \sqrt{n-1}} \le \frac{1}{100},$$ or $\sqrt n+ \sqrt{n-1} \ge 100$. Now I squared both sides:
$$2n-1+2 \sqrt n \sqrt{n-1}\ge 100.$$
So $$\sqrt n (\sqrt n +\sqrt{n-1})\ge \frac {101}2.$$ However, we know that $\sqrt n+\sqrt{n-1}\ge 100$, so I substituted that:
$$\sqrt n (100) \ge \frac {101}{2}.$$
However, if I solve for $n$ I get around $.25$, and there is no lower positive integer. What did I do wrong? Thanks!
AI: Let us stop temporarily at
$$\sqrt{n}+\sqrt{n-1}\ge 100.\tag{1}$$
Maybe the algebraic manipulations can stop here.
Note that (1) will certainly be true if $2\sqrt{n-1}\ge 100$, that is, if $n-1 \ge 2500$, i.e., $n\ge 2501$. But it is conceivable that (1) also holds at $n=2500$. It doesn't, since $\sqrt{2500}+\sqrt{2499}<100$; so the smallest such $n$ is $2501$. (Incidentally, the slip in your work is in the squaring step: squaring both sides of $\sqrt n+\sqrt{n-1}\ge 100$ gives $2n-1+2\sqrt{n(n-1)}\ge 10000$, not $100$.) |
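A two-line check (my addition) confirms the cutoff:

```python
import math

n = 1
while math.sqrt(n) - math.sqrt(n - 1) > 1 / 100:
    n += 1
print(n)   # 2501
```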
H: Clarification on some mathematics formula
In the book I am reading (Complexity and Cryptography by Talbot and Welsh, chapter 4), there is a theorem on $\textbf{BPP}$ where I don't understand a few steps of the proof. My difficulty is purely with the mathematical steps (tagged $(1)$), independent of the complexity concepts being discussed.
Proof: Suppose $x \in L$. We need to show that for a suitable choice of $t$, the probability that M rejects $x$ is at most $2^{-p(n)}$. Now M rejects $x$ iff N accepts $x$ at most $t$ times during the $2t + 1$ computations. Hence:
$$\begin{align*} Pr[\text{M rejects }x] &\le \sum_{k=0}^t \binom{2t+1}{k} \left(\frac {3}{4}\right)^k \left(\frac {1}{4}\right)^{2t+1-k} \tag{1}\\ &\le \frac {t+1}{4} \binom{2t+1}{t} \left(\frac {3}{16}\right)^{t}\\ &\le 2^{2t+1}\, \frac {t+1}{4} \left(\frac {3}{16}\right)^{t} \le (t+1) \left(\frac {3}{4}\right)^{t}\end{align*}$$
AI: M rejects $x$ iff N accepts $x$ at most $t$ times during the $2t+1$ computations.
It follows from this that
$$\begin{align*}
\Bbb P[\text{M rejects }x]&=\Bbb P[\text{N accepts }x\text{ at most }t\text{ times}]\\
&=\sum_{k=0}^t\Bbb P[\text{N accepts }x\text{ exactly }k\text{ times}]
\end{align*}$$
Suppose that the probability that $N$ accepts $x$ is $\frac34$. Let $K$ be a particular set of $k$ of the $2t+1$ trials; then the probability that $N$ accepts $x$ on all of the trials in $K$ and does not accept $x$ on any of the remaining trials is
$$\left(\frac34\right)^k\left(\frac14\right)^{2t+1-k}\;.$$
There are $\binom{2t+1}k$ possible sets of $k$ trials, so the probability that $N$ accepts $x$ on exactly $k$ trials is
$$\binom{2t+1}k\left(\frac34\right)^k\left(\frac14\right)^{2t+1-k}\;,$$
and
$$\sum_{k=0}^t\binom{2t+1}k\left(\frac34\right)^k\left(\frac14\right)^{2t+1-k}$$
is the probability that $N$ accepts $x$ at most $t$ times.
Without more context I'm not sure why we have only the inequality
$$\Bbb P[\text{M rejects }x]\le\sum_{k=0}^t\binom{2t+1}k\left(\frac34\right)^k\left(\frac14\right)^{2t+1-k}$$
instead of equality, but a likely reason is that in this setting N accepts $x\in L$ with probability at least $\frac34$, not exactly $\frac34$; since the probability of at most $t$ acceptances only decreases as the acceptance probability increases, the sum computed with exactly $\frac34$ is an upper bound. In any case, this should at least point you in the right direction. |
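You can also verify the quoted chain of inequalities numerically (my addition):

```python
from math import comb

def tail(t):
    """Pr[at most t acceptances in 2t+1 trials with acceptance prob. 3/4]."""
    return sum(comb(2*t + 1, k) * (3/4)**k * (1/4)**(2*t + 1 - k)
               for k in range(t + 1))

for t in (5, 10, 20):
    print(t, tail(t), (t + 1) * (3/4)**t)   # tail(t) <= (t+1)(3/4)^t
```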
H: Calculating algorithm sizes
I am reviewing BigO notation from this document: http://www.scs.ryerson.ca/~mth110/Handouts/PD/bigO.pdf
This document contains this information and the accompanying table:
In the following table we suppose that a linear algorithm can do a problem up to size 1000 in 1 second. We then use this to calculate how large a problem this algorithm could handle in 1 minute and 1 hour.
I can see where values such as $n^2$ come from, as it is getting the square root of 1000. However, I am at a loss as to how the author calculated $2^n$ and $n log(n)$. Could someone help determine how these values are calculated?
AI: For $2^n$, you just take the logarithm to the base $2$ of the problem-size budget: for instance, $\log_2 1000\approx 9.97$, so the largest problem a $2^n$ algorithm can handle in one second is of size $9$.
For $n\log n$, you are looking for the largest integer $n$ such that $n\log n$ does not exceed, say, $1000$. You can do that by just trying different values of $n$, zeroing in on the right value; there are more systematic ways, such as Newton's Method --- someone just asked a question here about solving $n\log n\le1,000,000$, maybe you could dig up that question and see what's written there. |
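For instance, here is a doubling-plus-bisection search (my addition; I use $\log_2$, but the base of the logarithm, and hence the exact numbers, depends on the table's conventions):

```python
import math

def largest_n(budget):
    """Largest integer n with n*log2(n) <= budget."""
    lo, hi = 1, 2
    while hi * math.log2(hi) <= budget:
        hi *= 2                     # find an upper bound by doubling
    while hi - lo > 1:
        mid = (lo + hi) // 2        # then bisect
        if mid * math.log2(mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

for budget in (1_000, 60_000, 3_600_000):   # 1 second, 1 minute, 1 hour
    print(budget, largest_n(budget))
```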
H: How find $\int(x^7/8+x^5/4+x^3/2+x)\big((1-x^2/2)^2-x^2\big)^{-\frac{3}{2}}dx$
How can I compute the following integral:
$$\int \dfrac{\frac{x^7}{8}+\frac{x^5}{4}+\frac{x^3}{2}+x}{\left(\left(1-\frac{x^2}{2}\right)^2-x^2\right)^{\frac{3}{2}}}dx$$
According to Wolfram Alpha, the answer is
$$\frac{x^4 - 32x^2 + 20}{2\sqrt{x^4 -8x^2 + 4}} + 7\log\bigl(-x^2 - \sqrt{x^4 - 8x^2 + 4} + 4\bigr) + C$$
but how do I get it by hand?
AI: Perform the substitutions $y=\frac{x^2}{2}, y=2+\sqrt{3}\sec{\theta}$ to get:
\begin{align}
&\int{\frac{\frac{x^7}{8}+\frac{x^5}{4}+\frac{x^3}{2}+x}{((1-\frac{x^2}{2})^2-x^2)^{\frac{3}{2}}} dx} \\
& =\int{\frac{y^3+y^2+y+1}{(y^2-4y+1)^{\frac{3}{2}}} dy} \\
&=\int{\left(\frac{y+5}{\sqrt{(y-2)^2-3}}+\frac{20y-4}{\sqrt{((y-2)^2-3)^3}}\right) dy} \\
&=\int{\left[\frac{7+\sqrt{3}\sec{\theta}}{\sqrt{3}\tan{\theta}}+\frac{36+20\sqrt{3}\sec{\theta}}{3\sqrt{3}\tan^3{\theta}}\right](\sqrt{3}\sec{\theta}\tan{\theta}) d\theta} \\
&=\int{\left(7\sec{\theta}+\sqrt{3}\sec^2{\theta}+12\csc{\theta}\cot{\theta}+\frac{20}{\sqrt{3}}\csc^2{\theta}\right) d\theta} \\
&=7\ln{|\tan{\theta}+\sec{\theta}|}+\sqrt{3}\tan{\theta}-12\csc{\theta}-\frac{20}{\sqrt{3}}\cot{\theta}+c \\
&=7\ln{\left|\frac{\sqrt{y^2-4y+1}+y-2}{\sqrt{3}}\right|}+\sqrt{y^2-4y+1}-\frac{12(y-2)}{\sqrt{y^2-4y+1}}-\frac{20}{\sqrt{y^2-4y+1}}+c \\
&=7\ln{\left|\frac{\sqrt{y^2-4y+1}+y-2}{\sqrt{3}}\right|}+\frac{y^2-16y+5}{\sqrt{y^2-4y+1}}+c \\
&=7\ln{\left|\sqrt{x^4-8x^2+4}+x^2-4\right|}+\frac{x^4-32x^2+20}{2\sqrt{x^4-8x^2+4}}+c'
\end{align}
where $c,c'$ are arbitrary constants of integration. |
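Added: a quick numerical spot check of the final antiderivative (a sketch assuming sympy is available; the evaluation point $x=\frac14$ is an arbitrary choice where the radicand is positive):

    import sympy as sp

    x = sp.symbols('x')
    f = (x**7/8 + x**5/4 + x**3/2 + x) / ((1 - x**2/2)**2 - x**2)**sp.Rational(3, 2)
    F = (7*sp.log(sp.sqrt(x**4 - 8*x**2 + 4) + x**2 - 4)
         + (x**4 - 32*x**2 + 20) / (2*sp.sqrt(x**4 - 8*x**2 + 4)))

    # F'(x) - f(x) should vanish; evaluate the difference numerically
    err = (sp.diff(F, x) - f).subs(x, sp.Rational(1, 4))
    print(sp.N(err))   # approximately 0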
H: Does the analog of homological algebra studying maps where, say, $d \circ d \circ d = 0$ have a name?
I don't have an application in mind or anything; I'm just curious.
We can think about homological algebra as the study of endomorphisms $d$ such that $d \circ d = 0$. Most of homological algebra seems to follow from this condition in an almost mechanical way. Naturally this leads me to wonder exactly how specifically important the condition $d \circ d = 0$ is, rather than conditions like $d \circ d \circ d = 0$ or more generally $d^n = 0$.
Have such things been studied? Do they have a name, or some applications?
AI: This MathOverflow thread covers the same question and has some good references. |
H: Find all $f(x)$ if $f(1-x)=f(x)+1-2x$?
To find one solution I assumed that $f$ is even and rewrote this as $f(x-1)-f(x)+2x=1.$ By just thinking about a solution, I was able to conclude that $f(x)=x^2$ is a solution. However, I am sure that there are more solutions but I don't know how to find them.
AI: HINT:
As $f(x)-x=f(1-x)-(1-x),$ put $f(x)-x=g(x)$
I'm tempted to add this :
(As Ivan Loh has pointed out), if we want $f(x)$ among polynomials:
As $f(x)-f(1-x)=2x-1$ has degree $1$, $f(x)$ can be at most quadratic.
Let $f(x)=ax^2+bx+c$
$\implies 2x-1=f(x)-f(1-x)=ax^2+bx+c-\{a(1-x)^2+b(1-x)+c\}$
$\implies 2x-1=-(a+b)+2(a+b)x$
Equating the constant terms gives $a+b=1$,
and equating the coefficients of $x$ gives $2(a+b)=2$, i.e. again $a+b=1\implies b=1-a$
So, any $f(x)=ax^2+(1-a)x+c$ will satisfy the given condition |
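Added: note that the hint actually gives all solutions: writing $f(x)=x+g(x)$, the condition is exactly $g(x)=g(1-x)$, i.e. $g$ is any function symmetric about $x=\frac12$. A quick symbolic check of the polynomial family (a sketch assuming sympy):

    import sympy as sp

    x, a, c = sp.symbols('x a c')
    f = a*x**2 + (1 - a)*x + c
    # f(1-x) - f(x) - (1 - 2x) should expand to 0 for every a and c
    print(sp.expand(f.subs(x, 1 - x) - f - (1 - 2*x)))   # prints 0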
H: Error in Neukirch's "Algebraic Number Theory"?
I found what I believe is an error in Neukirch's book, in Chapter 1 Section 3 (Ideals). Exercise 5 states
The quotient ring $\mathcal{O}/\mathfrak{a}$ of a dedekind domain by an ideal $\mathfrak{a} \neq 0$ is a principal ideal domain.
I believe I can prove every ideal in the quotient $\mathcal{O}/\mathfrak{a}$ is principal, but this need not be a domain, correct? (e.g. $\mathbb{Z}/(4)$)
This seems obvious, but I just need some validation otherwise I'll feel crazy.
AI: You're right. In this situation, the quotient ring is necessarily a principal Artinian ring, but not necessarily a domain. See Pete L. Clark's answer here. |
H: Problem understanding a proof
In the book I am reading (Complexity and Cryptography by Talbot and Welsh, chapter 4), there's this example:
Choosing an integer $a \in_R \{0,\dots,n\}$ using random bits.
We assume that we are given an infinite sequence of independent random
bits. To choose a random integer $a \in_R \{0, \dots, n\}$ we use the
following procedure (we suppose that $2^{k-1} \le n < 2^k$):
read $k$ random bits $b_1, \dots, b_k$ from our sequence. If $a = b_1 \cdots b_k$
belongs to $\{0, \dots, n\}$ (where $a$ is encoded in binary)
then output $a$;
else repeat.
On a single iteration the probability that an output is produced is
$$\Pr[a \in \{0, \dots, n\}] = \frac{n + 1}{2^k} > \frac 12$$
Thus the expected number of iterations before an output occurs is less than two
and, with probability at least $1 - 1/2^{100}$, an output occurs within a hundred
iterations.
Moreover when an output occurs it is chosen uniformly at random from
$\{0, \dots, n\}$. Since if $m \in \{0, \dots, n\}$ and we let $a_j$ denote the value of $a$ chosen
on the $j$-th iteration of this procedure, then
$$\begin{align*}
\Pr[\text{Output is }m] &= \sum_{j=1}^{\infty}\Pr[a_j = m \text{ and } a_1, \dots, a_{j-1} \ge n + 1]\\
&= \frac{1}{2^k}\sum_{j=0}^{\infty}\left(1-\frac{n+1}{2^k}\right)^j \tag{1}\\
&= \frac{1}{n+1}
\end{align*}$$
I have problem understanding two things:
Why is this: "Thus the expected number of iterations before an output occurs is less than two"?
The formula tagged $(1)$ (the last two steps)
Thank you.
AI: In the case of independent trials, the expected number of repetitions equals the reciprocal of the probability of success. Since the probability is greater than $\frac{1}{2}$, the expected number of iterations is smaller than $2$.
The formula (1) considers the possibilities of output $m$ being produced eventually. The variable $j$ in the sum goes over possible numbers of steps it could have taken to produce this output. In order to produce $m$ in $j$-th step, all the previous steps must have produced a "bad" number (one which is strictly greater than $n$ and thus rejected) and the last one must have produced precisely our desired number $m$. The rejection probability is $(1-\frac{n+1}{2^k})$ (complement of probability of success) and the probability of getting a specific $k$-bit string is $\frac{1}{2^k}$. Put together, we get a sum $$\sum_{j=1}^\infty \frac{1}{2^k}\left(1-\frac{n+1}{2^k}\right)^{j-1}$$ which can easily be seen to be equal to your sum. Since it's just geometric series, calculating the sum is easy and the $2^k$ terms cancel out. |
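Added: a minimal simulation of the procedure (my sketch, not from the book), which also illustrates the uniformity claim empirically:

    import random

    def sample_uniform(n, k):
        # repeatedly draw k random bits until they encode a number <= n
        while True:
            a = random.getrandbits(k)
            if a <= n:
                return a

    n, k = 5, 3              # here 2**(k-1) <= n < 2**k
    counts = [0] * (n + 1)
    for _ in range(60_000):
        counts[sample_uniform(n, k)] += 1
    print(counts)            # roughly 10_000 in each bucket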
H: If $f(a)$ is divisible by either $101$ or $107$ for each $a\in\Bbb{Z}$, then $f(a)$ is divisible by at least one of them for all $a$
I've been struggling with this problem for a while, I really don't know where to start:
Let $f(x) \in \mathbb{Z}[X]$ be a polynomial such that for every value of $a \in \mathbb{Z}$, $f(a)$ is always a multiple of $101$ or $107$. Prove that $f(a)$ is always divisible by $101$ for all values of $a$, or that $f(a)$ is divisible by 107 for all values of $a$.
AI: If neither of the statements "$f(x)$ is always divisible by $101$" or "$f(x)$ is always divisible by $107$" is true, then there exist $a,b\in{\bf Z}$ so that $107\nmid f(a)$ and $101\nmid f(b)$. It follows from hypotheses that
$$\begin{cases} f(a)\equiv 0\bmod 101 \\ f(a)\not\equiv0\bmod 107\end{cases}\qquad \begin{cases}f(b)\not\equiv 0\bmod 101 \\ f(b)\equiv 0\bmod 107\end{cases}$$
Let $c\in{\bf Z}$ be $\equiv a\bmod 107$ and $\equiv b\bmod 101$. Is $f(c)$ divisible by $101$ or $107$? |
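Added: the existence of such a $c$ is the Chinese Remainder Theorem. A minimal sketch of the construction in Python (the witnesses $a=3$, $b=10$ are placeholders, not derived from any particular $f$):

    # build c with c = a (mod 107) and c = b (mod 101) via CRT;
    # pow(m, -1, p) is the modular inverse (Python 3.8+)
    a, b = 3, 10
    m1, m2 = 107, 101
    c = (a * m2 * pow(m2, -1, m1) + b * m1 * pow(m1, -1, m2)) % (m1 * m2)
    print(c, c % m1 == a % m1, c % m2 == b % m2)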
H: Question involving Legendre symbols
Let $r,p,q$ be distinct odd primes, and let $4r$ divide $p-q$. Show that
$$\left(\frac{r}{p}\right) = \left(\frac{r}{q}\right)$$
where $\left(\frac{a}{b}\right)$ is the Legendre symbol.
I'm sure we are supposed to use the law of quadratic reciprocity. I don't think this question is supposed to be difficult, but I cannot figure it out!
AI: Hint:
$$4r\mid (p-q)\iff p\equiv q\bmod r\quad\text{and}\quad p\equiv q\bmod 4$$ |
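Added: a brute-force check of the claim via Euler's criterion, in case it helps to see the statement in action (my sketch; $r=3$ is an arbitrary choice):

    def legendre(a, p):
        # Euler's criterion: a^((p-1)/2) mod p is 1 or p-1 when gcd(a,p)=1
        t = pow(a, (p - 1)//2, p)
        return -1 if t == p - 1 else t

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

    r = 3
    primes = [p for p in range(5, 300) if is_prime(p)]
    for p in primes:
        for q in primes:
            if p > q and (p - q) % (4*r) == 0:
                assert legendre(r, p) == legendre(r, q)
    print("(r/p) = (r/q) for all tested pairs")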
H: Closest point in $y = \sqrt{x}$ to the origin is at $x=-1/2$?
When I solve for the point in $y = \sqrt{x}$ closest to the origin using calculus, I get $x = -1/2$. And this is the case for ALL functions $y = \sqrt{x + c}$ using the distance formula $d^2 = x^2 + y^2$.
Why is it that the extreme point is somewhere the function is not even defined? Obviously (at least to me), isn't the closest point to the origin (0, 0)?
EDIT:
Solution:
We want the nearest distance from the origin to some point P on the curve $y = \sqrt{x}$. We can use $d^2 = (x_2 - x_1)^2 + (y_2 - y_1)^2$ as the optimization function, with the constraint that one point, say $(x_2, y_2)$, lies on the curve $y = \sqrt{x}$. We set the other point $(x_1, y_1)$ to be the origin $(0, 0)$.
Putting this together, you get $d^2 = (x - 0)^2 + (\sqrt{x} - 0)^2 = x^2 + x$. Taking the derivative and setting it to zero to optimize: $\frac{d(d^2)}{dx} = 2x + 1 = 0$, so $x = -1/2$.
My question is: why is this the case? Is there a way to end up with $x = 0$ using calculus, without just looking at the graph?
AI: The square of the distance from the origin is as follows:
$$d^2=x^2+y^2$$
$$d^2=x^2+x$$
$$\frac{d}{dx}(x^2+x)=2x+1$$
In optimizing this, make sure you're considering all the critical points, including the endpoints of the domain (in this case $x=0$), not just the points where the derivative equals $0$. Here $2x+1>0$ for every $x\ge 0$, so $d^2$ is strictly increasing on the domain and the minimum occurs at the endpoint $x=0$.
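Added: a small numerical confirmation of the endpoint minimum (a sketch assuming scipy is available):

    from scipy.optimize import minimize_scalar

    # minimize d^2 = x^2 + x over the domain of y = sqrt(x), i.e. x >= 0
    res = minimize_scalar(lambda x: x**2 + x, bounds=(0, 10), method='bounded')
    print(res.x)   # essentially 0: the minimum sits at the endpoint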
H: How can $\frac{x}{\pi}-\frac{n-x}{1-\pi}$ be the correct derivative of $x\log\pi+(n-x)\log(1-\pi)$?
I am learning statistics and came across this calculation of the maximum-likelihood estimator for the binomial distribution. I don't understand the step from the second to the third row, where they took the derivative.
My attempt gave me only $\log(\pi)$ as the result. This is my calculation:
$$\left(x \log \left(π\right)+\left(n-x\right)\log \left(1-π\right)\right)' = \\ = x'\log \left(π\right)+x\log \left(π\right)' + \left(n-x\right)'\log \left(1-π\right)+\left(n-x\right)\log \left(1-π\right)' = \\ \log \left(π\right) + 0 + 0 + 0 = \log \left(π\right)$$
Am I doing this wrong?
AI: Note that in this case, $\pi$ is the variable, not $x$. Treating $x$ as a constant and differentiating with respect to $\pi$ gives $$\frac{\partial}{\partial \pi}\bigl(x \log \pi + (n - x)\log(1 - \pi)\bigr) = \frac{x}{\pi} - \frac{n - x}{1 - \pi}.$$
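Added: a one-line symbolic check (a sketch assuming sympy):

    import sympy as sp

    x, n, p = sp.symbols('x n pi')
    loglik = x*sp.log(p) + (n - x)*sp.log(1 - p)
    print(sp.diff(loglik, p))   # x/pi - (n - x)/(1 - pi)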
H: How to show this equality of probability on the unit disk
I came up with this problem, which I think is intuitively obvious, but I fail to give a rigorous and convincing argument.
Let $(X,Y)$ be uniformly distributed in the disk $D:=\{(x,y):x^2+y^2\le 1\}$
For $x\in(-1,1)$ and small $\Delta>0$, $\mathbb{P}(X\in(x,x+\Delta))=\mathbb{P}((X,Y)\in S)$, where S is the shaded area on the diagram.
This equality seems intuitive and straightforward, and it agrees with common sense. I think what causes the LHS and RHS to be equal is the symmetry of the circle. But how can I convince myself more rigorously (or prove it, if a proof exists)?
Thanks.
AI: First, let's define $S$ explicitly:
$$
S
= \{(u,v): x < u < x + \Delta, u^2 + v^2 \leq 1 \}
= ((x,x+\Delta)\times \mathbb{R})\cap D.
$$
Then
\begin{align*}
P[X \in (x, x + \Delta)]
&= P[(X,Y) \in (x, x + \Delta)\times \mathbb{R}, (X,Y) \in D] \\
&= P[ (X,Y) \in S].
\end{align*} |
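Added: an empirical check, if it helps convince you (my sketch, standard library only): sample uniformly from the disk by rejection and compare the frequency of $X \in (x, x+\Delta)$ with the area of the strip $S$ divided by $\pi$.

    import random
    from math import sqrt, pi

    x0, delta = 0.2, 0.05

    def sample_disk():
        # rejection sampling: uniform point in the unit disk
        while True:
            u, v = random.uniform(-1, 1), random.uniform(-1, 1)
            if u*u + v*v <= 1:
                return u, v

    trials = 200_000
    hits = sum(1 for _ in range(trials) if x0 < sample_disk()[0] < x0 + delta)

    # area(S)/pi by a midpoint rule: the chord at abscissa u has length 2*sqrt(1-u^2)
    steps = 10_000
    area = sum(2*sqrt(1 - (x0 + (i + 0.5)*delta/steps)**2) * delta/steps
               for i in range(steps))
    print(hits/trials, area/pi)   # the two estimates should be close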
H: Find the tangent line of $\frac{x^2}{y+1}+xy^2=4$ at $y=1$ where $y<x$
I want to find the tangent line for the function $\frac{x^2}{y+1}+xy^2=4$ at the point $y=1$ and where $y<x$.
First step: finding the point. Inserting $y=1$ gives: $$x^2+2x-8=0 \rightarrow x_1=2,\ x_2=-4$$
$x_1 = 2$ satisfies the conditions.
Step two: finding the derivative
$$\frac{2x(y+1)-y'x^2}{(y+1)^2}+2xy\times y'= 0 $$
$$2x(y+1)-y'x^2+2xyy'(y+1)^2=0 \rightarrow y'(-x^2+2xy(y+1)^2)=-2x(y+1)$$
$$y'=\frac{-2x(y+1)}{-x^2+2xy(y+1)^2}$$
What I get after setting $y=1, x=2$ is $-\frac{8}{12}$, but the answer is $-\frac{12}{12}$, so I guess I went wrong in the differentiation process.
Do I have a fundamental error in implicit differentiation?
I'd like to get some advice.
Thanks.
AI: You have forgotten that the $xy^2$ term requires the product rule when differentiating: $\frac{d}{dx}\left(xy^2\right) = y^2 + 2xy\,y'$, not just $2xy\,y'$.
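Added: for a quick check of the corrected slope (a sketch assuming sympy; idiff performs the implicit differentiation):

    import sympy as sp

    x, y = sp.symbols('x y')
    eq = x**2/(y + 1) + x*y**2 - 4
    slope = sp.idiff(eq, y, x)        # dy/dx from the implicit equation
    print(slope.subs({x: 2, y: 1}))   # -1, i.e. the book's -12/12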
H: Is there a formula in permutations and combinations if we are to find the sum of number of 1's in binary expansion of a number from 1 to n
We are given $N$. Suppose $f(x) =$ number of $1$'s in the binary expansion of $x$.
We have to calculate $f(1) +f(2) +f(3)+ \dots +f(N)$.
So is there a formula for this sum directly in terms of permutations and combinations?
Thanks in advance.
AI: It is not too hard to figure out (but a bit messy!) by counting column by column: the $1$'s column alternates between $0$ and $1$ with every number, which gives $\lceil N/2 \rceil$ ones; the $2$'s column switches between $0$ and $1$ every two numbers (think about how many ones this gives); the $4$'s column switches every four numbers; and so on. A sketch of this count is given below.
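A sketch of that column-by-column count (my code, not a standard reference), with a brute-force check:

    def total_ones(n):
        # sum f(1) + ... + f(n) by counting, for each bit position i,
        # how many of 0..n have that bit set
        total, i = 0, 0
        while (1 << i) <= n:
            block = 1 << (i + 1)
            full_cycles = (n + 1) // block
            remainder = (n + 1) % block
            total += full_cycles * (1 << i) + max(0, remainder - (1 << i))
            i += 1
        return total

    assert all(total_ones(n) == sum(bin(m).count('1') for m in range(1, n + 1))
               for n in range(1, 500))
    print(total_ones(1000))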