Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia


The Wikipedia Reference Desk covering the topic of mathematics.
WP:RD/MA

Welcome to the mathematics reference desk.
Is there any way I can get a faster answer?
  • Yes, you can search first. Please do this.
  • Search the reference desk archive to see if your question has been asked and answered earlier.
  • Entering search terms in the box to the left may locate useful articles in Wikipedia.
  • Many questions can be immediately answered by a simple google search.
  • Yes, if you need advice or opinions, it's better to ask elsewhere.
    • The reference desk does not answer (and will probably remove) requests for medical or legal advice. Ask a doctor, dentist, veterinarian, or lawyer instead.
    • The reference desk does not answer requests for opinions or predictions about future events. Do not start a debate; please seek an internet forum instead.

How do I word my question for the best results?

  • Include a meaningful title. Do not write "Question" or "Query", but write a few words that briefly tell the volunteers the subject of the question.
  • Include context. Include links to any information that might help to understand your question. Tell us what part of the world your question applies to.
  • Do not provide your contact information. E-mail or home addresses, or telephone numbers, will be removed. You must return to this page to get your answer.
  • Type ~~~~ (four tildes) at the end of your question. This signs your contribution so volunteers know who wrote what in the conversation.
  • If your question is homework, show that you have attempted an answer first, and we will try to help you past the stuck point. If you don't show an effort, you probably won't get help. The reference desk will not do your homework for you.

When will I get an answer?

  • It may take several days. Come back later and check this page for responses. Later posts may add more information. Please, post your question on only one section of the reference desk.
After reading the above, you may
ask a new question by clicking here.
How to answer a question
  • Be thorough. Provide as much of the answer as you are able to.
  • Be concise, not terse. Please write in a clear and easily understood manner. Keep your answer within the scope of the question as stated.
  • Provide links when available, such as wikilinks to related articles, or links to the information that you used to find your answer.
  • Be polite and assume good faith, especially with users new to Wikipedia.
  • Don't edit others' comments, except to fix formatting errors that interfere with readability.
  • Don't give any legal or medical advice. The reference desk cannot answer these types of questions.
 
See also:
Help desk
Village pump
Help manual



March 20

First and Second derivative

h(x) = x \sqrt{9 - x^2}: I need help finding the first and second derivatives. I'm confused with the product and chain rule. —Preceding unsigned comment added by Dew555 (talk · contribs) 02:42, 20 March 2009 (UTC)

If you show what you've done so far, we can see where you may be going wrong. It's policy not to just give answers. However, you could start with the simpler case of h(x)= x \sqrt{x}, both by the product rule and, to check, by simplifying first to get the overall power of x. If that's OK, what's the derivative of h(x)= \sqrt{9 - x^2}, using the chain rule? Then put the parts together. →86.132.238.145 (talk) 10:12, 20 March 2009 (UTC)
(Explicit answer by User:Superwj5 removed --131.114.72.215 (talk) 10:18, 20 March 2009 (UTC))
These people will help you if you show you have made some effort to solve your problem. That way they can better understand where you ran into difficulties. So please try answering the following:
1. Are you able to apply the product (Leibniz) formula for the first and second derivative of a function f(x) = x u(x) in terms of u'(x) and u''(x)? What do you get?
Now put  u(x):=\sqrt{9 - x^2}.
2. Are you able to write u'(x) and u''(x) for this function? What do you get?
3. Are you able to substitute the result of point 2 into the formula of point 1? What do you get?
--131.114.72.215 (talk) 10:14, 20 March 2009 (UTC)
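For reference, the product-rule identities being hinted at above are standard calculus (written out here as a template, not a full answer to the original problem):

```latex
f(x) = x\,u(x) \quad\Longrightarrow\quad
f'(x) = u(x) + x\,u'(x), \qquad
f''(x) = 2\,u'(x) + x\,u''(x)
```

Substituting the given u(x) into these, as the steps above suggest, is what remains for the original poster.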
(edit conflict ×3) If you are having difficulty with this problem, then try starting with some easier problems. For example, try differentiating the following functions. If you get stuck, let us know where you get stuck:
  1.  \sqrt x (Requires power rule.)
  2. (x − 1)(2x − 3) (Requires either product rule or power rule.)
  3. cos(x) + x^3 (Requires power rule and derivatives of sums.)
  4. sin(x)(cos(x) + x^3) (Requires product rule.)
  5.  (\sqrt x) (x^3 + x^2) (\cos (x)) (Requires product rule twice.)
  6. sin(x^4 + 2) (Requires chain rule.)
  7.  \sqrt {\sin (x^4 + 2)} (Requires using the chain rule twice.)
  8.  \cos \sqrt {\sin (x^4 + 2)} (...and three times.)
After you understand how to apply the product rule and chain rule separately, then try your original problem.
Maybe some other ref-desk-ers can offer help explaining...? Eric. 131.215.158.184 (talk) 10:17, 20 March 2009 (UTC)

Polynomial Roots

Is there a formula or method to find the roots of any polynomial? And if yes, what? The Successor of Physics 09:50, 20 March 2009 (UTC)

There are formulas that give you the roots of quadratics, cubics and quartics, and I've no doubt we have articles on them. For the quintic and higher it's a theorem (see Galois theory) that there's no formula that only involves ordinary arithmetic operations and nth roots. But you can solve the quintic with the ultraradical, and there are probably higher-degree analogues. Of course, if an approximation is good enough, you can solve any polynomial with numerical methods... 129.67.186.200 (talk) 10:15, 20 March 2009 (UTC)
The articles in question are Quadratic equation#Quadratic formula, Cubic function#General formula of roots and Quartic function#The general case, along Ferrari's lines. Note that the last two of these are extremely rarely used in practice, because the roots are almost always simpler when written as "the roots of polynomial so-and-so" than when written as whatever comes out of these formulas. And if you need the roots for an application, you'll need to compute an approximation sooner or later anyway, and I wouldn't be surprised if it often took more computer power to calculate all these radicals to the needed precision than to just go for the roots of the polynomial directly with something like Newton's method. The search for these formulas makes an interesting piece of mathematics history, though. —JAOTC 11:31, 20 March 2009 (UTC)
Indeed. Approximating a radical is no easier than approximating a root of a general polynomial, so if you are going to calculate the root then the radical expressions are useless, it is faster and more precise to approximate the root directly. — Emil J. 13:25, 20 March 2009 (UTC)
To third this opinion: yes, it takes more computation to evaluate an expression in radicals (correctly) numerically than it does to find the root. One major problem is many of these expressions suffer from catastrophic cancellation, so that to get k-bits of accuracy in the final answer, you may need nk-bits of accuracy approximating the radical, where I think it can be very difficult to bound n above. In other words, even with an oracle that spit out the expression in radicals for free, it might still take much much longer than directly applying a root finding technique, especially if the roots are real and not clustered. This is similar to matrix inversion. Even if someone offers you the inverse matrix for free, multiplication by it (correctly) might be more expensive than using a standard linear solver. JackSchmidt (talk) 19:00, 20 March 2009 (UTC)
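To illustrate the "just go for the roots directly" advice, here is a minimal Newton's method sketch (my own example; the quintic x^5 − x − 1 is chosen because it is a standard example of a polynomial not solvable in radicals):

```python
def newton_root(f, df, x0, tol=1e-12, max_iter=100):
    """Find a root of f near x0 by Newton's method."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Example: the quintic x^5 - x - 1, which has no solution in radicals.
f = lambda x: x**5 - x - 1
df = lambda x: 5 * x**4 - 1
root = newton_root(f, df, 1.5)
print(root)  # ~1.1673, the unique real root
```

A handful of iterations already gives full double precision, which is the point made above: the numerical route is cheap and avoids the cancellation problems of evaluating radical expressions.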
See solvable group and Abel's impossibility theorem. --PST 02:19, 21 March 2009 (UTC)
Hey, let me tell you something: I HATE APPROXIMATIONS!!!!!!!!!!!!!!!!!!!! The Successor of Physics 17:17, 21 March 2009 (UTC)
Unless the polynomial can be solved in radicals (i.e. its Galois group is solvable), approximations are pretty much all you can have (either that, or really weird notation that doesn't help you much). Even when there is a solution in radicals, it may be extremely complicated, so it doesn't help much anyway. In many situations it's best not to solve the polynomial at all and just work with it as it is; when you do need to solve it, approximations are often best. --Tango (talk) 17:44, 21 March 2009 (UTC)
And the radicals don't help you get any more than approximations if you want the answers to be expressed numerically. So you're no worse off for not being able to use radicals. Michael Hardy (talk) 19:26, 21 March 2009 (UTC)
Well, yes, "numerical" essentially implies "approximate" (unless it happens to be a rational number with a small denominator). --Tango (talk) 23:54, 21 March 2009 (UTC)

Good question! While there is not a "formula" in the usual sense, one can nevertheless operate on algebraic roots in a fairly arbitrary manner. For instance one can form polynomials in any finite number of roots of polynomials, by a result due to Leopold Kronecker. There are more modern algorithms, although I do not remember the source. Write to me if you are really interested, and I will try to dig them out. Bounding the roots is more straightforward. To the best of my knowledge, an initial bisection followed by Newton's method generally suffices to isolate the roots. 71.182.216.55 (talk) 02:51, 22 March 2009 (UTC)

You can also use the Sturm sequence, I think. 75.62.6.87 (talk) 10:38, 25 March 2009 (UTC)
Thanks! The Successor of Physics 11:30, 26 March 2009 (UTC)

Question of Cubes

I am trying to prove that 1^3 ± 2^3 ± ... ± n^3 = 0 has a solution, for a suitable choice of signs, for all n of the form 4k−1 or 4k.

To do this, I need what I call start points for the proof. These are n = 11, 12, 15, 16, 19, 20, 23, 24. Once I have these numbers I can easily show that any 16 consecutive cubes can be made to sum to 0 by a particular arrangement of signs. However, proving the start-point cases is something that eludes me. Any suggestions as to the choice of sign in each case for the following expressions:

1^3 ± 2^3 ± ... ± 11^3

1^3 ± 2^3 ± ... ± 12^3

1^3 ± 2^3 ± ... ± 15^3

1^3 ± 2^3 ± ... ± 16^3

1^3 ± 2^3 ± ... ± 19^3

1^3 ± 2^3 ± ... ± 20^3

1^3 ± 2^3 ± ... ± 23^3

1^3 ± 2^3 ± ... ± 24^3,

in order to make each of them 0? —Preceding unsigned comment added by 143.117.143.13 (talk) 12:34, 20 March 2009 (UTC)

This looks like a perfect task for a computer: finite, numerical, and boring. Algebraist 13:27, 20 March 2009 (UTC)
Yes, just looking at the last case, there are 23 signs which can each be either + or −. That's 2^23 possibilities, or some 8.4 million. That's well beyond what a person can do, but easy for a computer to handle. StuRat (talk) 14:56, 20 March 2009 (UTC)
Come on, people. How did mathematicians ever get anything done before there were computers? I found one solution for n=12 and think there are no solutions for n=11.
1^3 + 2^3 − 3^3 + 4^3 − 5^3 − 6^3 − 7^3 + 8^3 + 9^3 − 10^3 − 11^3 + 12^3 = 0.
I promise to find another one if the original poster explains how his general solution works. Dauto (talk) 18:24, 20 March 2009 (UTC)
Before computers mathematicians had to do a lot more tedious work. I for one am not masochistic enough to go back to those days without good reason. Algebraist 18:33, 20 March 2009 (UTC)
And those cases for n=11 and n=12 are far simpler. The n=11 case only has 1024 possibilities to try, which, while tedious, is at least possible to do by hand. StuRat (talk) 18:40, 20 March 2009 (UTC)
As for the general solution, it works similarly to the case 1^2 ± 2^2 ± … ± n^2 = 0 which was answered on this desk recently (but I didn't succeed in locating it in the archives): for every n, the sequence of cubes n^3, …, (n + 15)^3 with signs +−−+−++−−++−+−−+ sums to zero. — Emil J. 18:37, 20 March 2009 (UTC)
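That 16-sign claim is easy to verify by machine; a quick check (my own sketch):

```python
# The 16 signs +--+-++--++-+--+ from the post above.
signs = [+1, -1, -1, +1, -1, +1, +1, -1, -1, +1, +1, -1, +1, -1, -1, +1]

def window_sum(n):
    """Signed sum of the 16 consecutive cubes n^3, ..., (n+15)^3."""
    return sum(s * (n + j)**3 for j, s in enumerate(signs))

print(all(window_sum(n) == 0 for n in range(0, 200)))  # True
```

So any 16 consecutive cubes with this sign pattern cancel, exactly as stated.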
For those of you saying it's impossible to do it by hand: I got a solution for n=24, and I used only a hand-held calculator.
1^3 + 2^3 + 3^3 + 4^3 − 5^3 − 6^3 + 7^3 + 8^3 − 9^3 + 10^3 + 11^3 + 12^3 − 13^3 + 14^3 + 15^3 + 16^3 + 17^3 + 18^3 + 19^3 − 20^3 − 21^3 − 22^3 + 23^3 − 24^3 = 0
Dauto (talk) 20:31, 20 March 2009 (UTC)
So how did you do it ? Trying every combo obviously isn't the answer. StuRat (talk) 21:48, 20 March 2009 (UTC)
No, trying every combination would take too long. First, the number of solutions probably grows quite fast as well, and all we need is one solution for each value of n. Second (and more important), there are some heuristics that can speed things up considerably. During the search, any time a partial sum (or difference) exceeds the sum of the remaining terms (the ones not yet included in the partial sum), we simply skip that branch of the search, since no combination of signs of the remaining terms will be able to cancel the partial sum. That saves an incredible amount of time, since the distribution of the terms is quite skewed, as long as we start the search by fixing the signs of the larger terms first and work our way down to the smaller terms. That alone allows the solution for n=12 to be found in just a few minutes. Dauto (talk) 04:49, 21 March 2009 (UTC)
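That pruned search can be sketched in a few lines of Python (my reconstruction of the idea, not Dauto's actual program): fix signs from the largest term down, and abandon any branch whose partial sum is bigger than the smaller terms can possibly cancel.

```python
def find_signs(n):
    """Choose signs s_1..s_n (s_1 fixed to +) with sum of s_k * k^3 == 0.

    Returns the list of signs, or None if no choice works.  Largest terms
    are signed first; a branch is pruned as soon as its partial sum
    exceeds the total of the terms still unsigned.
    """
    cubes = [k**3 for k in range(1, n + 1)]
    remaining = [0] * (n + 1)
    for k in range(n):
        remaining[k + 1] = remaining[k] + cubes[k]  # remaining[i] = 1^3+...+i^3

    def search(i, partial):
        # terms (i+1)^3 .. n^3 are already signed; partial is their signed sum
        if abs(partial) > remaining[i]:
            return None  # the i smallest terms cannot cancel this branch
        if i == 1:
            return [+1] if partial == -1 else None  # 1^3 is fixed positive
        for s in (+1, -1):
            tail = search(i - 1, partial + s * cubes[i - 1])
            if tail is not None:
                return tail + [s]
        return None

    return search(n, 0)

print(find_signs(12))  # a list of 12 signs whose signed cubes sum to zero
print(find_signs(11))  # None
```

The pruning makes n=12 essentially instant, and the None for n=11 matches the claim above that no solution exists there.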


Yes, how did you do it? Well, some nights ago, unable to sleep, I made some of these computations on my computer, just out of curiosity... so: no solutions for n<12, and only two for n=12: one is Dauto's, the other is the same with inverted signs. For the cases n=15 and n=16, we get solutions for free from the 16 signs written by EmilJ, because we can start the sequence either at 0 or at 1. To make it short, it came out that there are solutions for any \scriptstyle n=4k-1,\, n=4k,\, 12\leq n\leq 27. I did not make the program write them out, though, just the number of solutions. The day after, I remembered Sloane's OEIS, and searched for the sequences of the numbers of solutions relative to sums of squares and cubes: neither was there, so I sent them in (after all, it contains sequences like a(n)=100n+1, so it may well have these too). However, the number of solutions to \scriptstyle\pm 1\pm 2..\pm n=0 was there: [1], with some interesting links. One is to S. R. Finch's page, where he finds in a quick and nice way the asymptotics for the number of solutions, by means of the central limit theorem; the computation seems to extend to the case of sums of r-th powers, luckily giving the same result one gets by expanding the integrals.
By the way, note that the 16 signs in the sequence above are those of the expansion of the polynomial \scriptstyle P(x):=(1-x)(1-x^2)(1-x^4)(1-x^8)=\sum_{j=0}^{15}\varepsilon_j x^j. Since (1-x)^4 divides \scriptstyle P(x), if S is the shift operator, acting on sequences by \scriptstyle (Sx)_k=x_{k+1}, then the linear operator \scriptstyle P(S)=\sum_{j=0}^{15}\varepsilon_j S^j contains the fourth discrete difference operator \scriptstyle (S-I)^4 as a factor: and this is the reason why it kills the sequence of cubes, and in fact all polynomial sequences of degree <4. Of course, this argument generalizes to any degree r, so we have solutions to \scriptstyle\pm 1^r\pm 2^r\pm..\pm n^r=0 for all n = 2^{r+1}k and n = 2^{r+1}k − 1. It seems difficult to say what the least n=n(r) for which there is a solution is; it could be much less than the bound n(r) < 2^{r+1}... --pma (talk) 21:54, 20 March 2009 (UTC)
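The general-r construction above can also be checked mechanically. The coefficients of the product polynomial are equivalently described by ε_j = (−1)^popcount(j) (the sign alternates with the number of 1-bits of j), so a one-liner verifies that a window of 2^(r+1) consecutive r-th powers with these signs always cancels (my own sketch):

```python
def window_is_zero(r, n):
    """Check sum over j < 2^(r+1) of (-1)^popcount(j) * (n+j)^r == 0."""
    return sum((-1) ** bin(j).count("1") * (n + j)**r
               for j in range(2 ** (r + 1))) == 0

print(all(window_is_zero(r, n) for r in range(1, 6) for n in range(0, 50)))  # True
```

For r = 3 this is exactly the 16-sign pattern in the thread; for general r it confirms the claimed solutions at n = 2^{r+1}k and n = 2^{r+1}k − 1.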

Here's some Python code to solve this using dynamic programming:

numzerosumcubes = {0: 1}  # map: achievable signed sum -> number of sign choices
for i in range(1, 25):
    replacement = {}
    for s, frequency in numzerosumcubes.items():
        replacement[s - i**3] = replacement.get(s - i**3, 0) + frequency
        replacement[s + i**3] = replacement.get(s + i**3, 0) + frequency
    numzerosumcubes = replacement
    print("Number of solutions for n =", i, ":", numzerosumcubes.get(0, 0))

It finds solutions for all n congruent to 0 or -1 mod 4, starting at 15, and outputs the number of solutions for each n. —David Eppstein (talk) 22:11, 20 March 2009 (UTC)

Intrinsic Equations

Given the intrinsic equation of a curve s = f(ψ), how do you find the Cartesian form of the equation? 92.3.124.220 (talk) 22:22, 20 March 2009 (UTC)

Here ψ is the angle between the tangent to the (planar) curve and the x-axis, I suppose. Write it as ψ = g(s). Then you should find a Cartesian parametrization by arc length (x(s), y(s)), where x(s) and y(s) come from the two differential equations x'(s) = cos(g(s)) and y'(s) = sin(g(s)). Check here also. --84.221.81.199 (talk) 09:58, 21 March 2009 (UTC)
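As a concrete check of this recipe, take the circle: s = Rψ, i.e. g(s) = s/R. Integrating x'(s) = cos(g(s)), y'(s) = sin(g(s)) numerically (a simple Euler sketch of my own, not a production integrator) recovers a circle of radius R centred at (0, R):

```python
import math

def curve_from_intrinsic(g, s_max, steps=20000):
    """Integrate x'(s) = cos(g(s)), y'(s) = sin(g(s)) from (0, 0)
    with a basic Euler scheme (left Riemann sums)."""
    ds = s_max / steps
    x = y = 0.0
    pts = [(x, y)]
    for k in range(steps):
        s = k * ds
        x += math.cos(g(s)) * ds
        y += math.sin(g(s)) * ds
        pts.append((x, y))
    return pts

# Example: s = R*psi, i.e. g(s) = s/R, should trace a circle of radius R.
R = 2.0
pts = curve_from_intrinsic(lambda s: s / R, 2 * math.pi * R)
worst = max(abs(math.hypot(x, y - R) - R) for x, y in pts)
print(worst)  # small: every point lies near the circle centred at (0, R)
```

Analytically the same integration gives x(s) = R sin(s/R), y(s) = R(1 − cos(s/R)), which is the circle in closed form.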


March 21

Looking for a reference

A document I'm reading has the following passage (italics are mine):

"...laboratory controls must include the establishment of scientifically sound and appropriate specifications, standards, sampling plans and test procedures to assure that components and products conform to appropriate standards. One example of a scientifically sound statistical sampling and analytic plan is based on a binomial approach (see Table 1: Product Performance Qualification Criteria for the Platelet Component Collection Process). The sampling sizes described in Table 1 will confirm with 95% confidence a < 5% non-conformance rate for pH and residual WBC count, and < 25% non-conformance rate for actual platelet yield.

However, other statistical plans may also be appropriate, such as the use of scan statistics."

Do we have an article on scan statistics that is under another name? If not, does anyone know of a concise introduction to the principles? I see books on Amazon, but I'm looking for a not-too technical overview (i.e. less than ten pages in length). SDY (talk) 19:54, 21 March 2009 (UTC)

There is a short (and IMHO not very good) introduction to scan statistics in the "Guide to the preparation, use and quality assurance of blood components" published by the Council of Europe. Basically, it is binomial statistics with a small twist: instead of considering your samples independently, you look at a "moving window". Say your sample size is n. When you do a new quality control, you look at the set of the n−1 previous quality controls plus your new one. When setting up a scan statistics-based QC program, you need to know (1) the baseline error rate that is considered acceptable, and (2) the error rate at which you want the test to indicate a quality failure. Next, you need to find a combination of a moving window size and a maximum acceptable number of failed tests within such a window that results in a low false-positive rate and a high probability of detecting a quality failure. The chapter presents a table with some combinations of window sizes and maximum allowed failures per window. Unfortunately, the accepted error rates in that table are too high for the table to be very useful. In addition to determining a window size and a maximum acceptable number of failures per window, the table requires that you specify a third number, the "universe", corresponding to the number of samples analyzed per year. The false-positive rates in the table are calculated with respect to the "universe", whereas the power of the test to detect quality failure is calculated on a sample-by-sample basis.
Since two consecutive samples are not independent (they share n−1 observations), the maths for calculating false-positive rates becomes quite tricky. The chapter refers to this book for the calculations. However, I've heard from a trustworthy source that the false-positive rates in the table were actually calculated by Monte Carlo simulations, and not by the formulae in the book. The probability of detecting a quality failure (power) as presented in the table was calculated using the cumulative binomial distribution, and is thus easily checked. If you do so, you will see that there is an error in the bottom row of the table. I've done simulations myself, which have shown that the false-positive rates in the table appear to be correct. --NorwegianBlue talk 13:24, 22 March 2009 (UTC)
That helps, thanks. SDY (talk) 15:35, 22 March 2009 (UTC)
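The moving-window scheme described above is easy to simulate. Here is a small Monte Carlo sketch (my own illustration; the window size, threshold and error rates are made-up numbers, not the Council of Europe table):

```python
import random

def alarm_probability(p, n_samples, window, max_fail, trials=2000, seed=1):
    """Monte Carlo estimate of the chance that some moving window of size
    `window` contains more than `max_fail` failures, when each of
    `n_samples` QC samples fails independently with probability p."""
    rng = random.Random(seed)
    alarms = 0
    for _ in range(trials):
        fails = [rng.random() < p for _ in range(n_samples)]
        count = sum(fails[:window])
        triggered = count > max_fail
        for i in range(window, n_samples):
            count += fails[i] - fails[i - window]  # slide the window by one
            if count > max_fail:
                triggered = True
                break
        alarms += triggered
    return alarms / trials

# Illustrative numbers only: 500 samples/year, window of 60, alarm on >4 fails.
fp = alarm_probability(p=0.01, n_samples=500, window=60, max_fail=4)
power = alarm_probability(p=0.10, n_samples=500, window=60, max_fail=4)
print(fp, power)  # rare alarms at the baseline rate, near-certain at 10x
```

This mirrors the structure described in the reply: the false-positive rate is computed over the whole "universe" of yearly samples, while a genuine quality failure is caught quickly because almost every window then exceeds the threshold.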

Central Limit Theorem & the Gamma Distribution

Hi there refdesk - I'm trying to show, using the central limit theorem, that the integral

\frac{\lambda ^n}{(n-1)!}\int_0^{\lambda n}x^{n-1}e^{- \lambda x}\,dx

tends to 0 as n tends to infinity for any positive real λ. By taking the limit as n tends to infinity, we should have a normal distribution as follows:

\frac{G - \frac{n}{\lambda}}{\frac{\sqrt{n}}{\lambda}} \to N(0,1) where G is our gamma distribution, so then G \to N(\frac{n}{\lambda},\frac{n}{\lambda^2}), right? Then how do I show that the integral of the normal PDF over [0,λn] tends to 0 as n tends to infinity?

Mathmos6 (talk) 22:48, 21 March 2009 (UTC)Mathmos6

Take λ = 1. Then the integral of the normal pdf tends to 1 as n\to\infty. So there is an error somewhere. 71.182.216.55 (talk) 02:31, 22 March 2009 (UTC)

I figured there must be but I can't spot where - i assume it must be my very first bit with G, but I'm not sure where I've gone wrong... Mathmos6 (talk) 02:48, 22 March 2009 (UTC)Mathmos6

I believe that the initial assertion is wrong. I can do it with n! but not (n-1)!. Check your earlier calculations. 71.182.216.55 (talk) 03:08, 22 March 2009 (UTC)
With an n! it's obvious: the quantity given is a probability, so at most 1, so dividing by n makes it tend to 0. Algebraist 03:13, 22 March 2009 (UTC)
I didn't mean to suggest that my modification of the result was deep. But the result as stated is wrong. Would you care to confirm? 71.182.216.55 (talk) 03:23, 22 March 2009 (UTC)
The limit appears to be 1/2 (for λ=1). I can't see anything wrong with the CLT argument, and the integral of the normal tends to 1/2 (not 1 as stated above). Algebraist 03:27, 22 March 2009 (UTC)
Yes, 1/2 is right (I took the normal distribution over (-n,n) rather than (0,n)). Anyway, we are substantively in agreement that the result is definitely not zero. 71.182.216.55 (talk) 03:38, 22 March 2009 (UTC)

OK, we're looking at this integral:

\frac{\lambda ^n}{(n-1)!}\int_0^{\lambda n}x^{n-1}e^{- \lambda x}\,dx

Now let

 \begin{align} u & = \lambda x \\ du & = \lambda\,dx \end{align}

Then as x goes from 0 to λn, u goes from 0 to λ^2 n, and already I'm wondering if you didn't intend n/λ instead of λn. If what you wrote is what you intended, then the integral becomes

 \frac{1}{(n-1)!}\int_0^{\lambda^2 n}u^{n-1}e^{-u}\,du.

But if you intended n/λ, then the integral becomes

 \frac{1}{(n-1)!}\int_0^n u^{n-1}e^{-u}\,du.

For a Gamma distribution with expected value n and variance n, this integral is the probability that a random variable with that distribution is between the mean and √n standard deviations below the mean. That doesn't go to 0, but maybe it looks more promising than the other thing. Michael Hardy (talk) 03:43, 22 March 2009 (UTC)

I had considered myself whether n lambda was the correct upper limit for the integral but checked and rechecked and it certainly is - perhaps the question was simply written down wrong? Mathmos6 (talk) 04:55, 22 March 2009 (UTC)Mathmso6

Yes, something is wrong, as they say. But what exactly did you check? Anyway, you may clarify this thing a little if you consider what the central limit theorem actually tells you about a sequence of iid random variables ξ_1, ξ_2, ξ_3, ... with exponential distribution, which was the topic of your problem, as we may reasonably presume (and I assume that it was an exercise, whose text has been corrupted at some point). If each ξ_k has pdf \scriptstyle\lambda\exp(-\lambda x) supported in \scriptstyle\R_+, you should find, by the central limit theorem,
\frac{\lambda^n}{(n-1)!}\int_0^{\frac{n}{\lambda}+\frac{\sqrt{n}}{\lambda}c}x^{n-1}e^{-\lambda x}dx tends to \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{c}e^{-\frac{x^2}{2}}dx,
as n tends to infinity, for any positive real lambda and all \scriptstyle c\in\R. Notice that for λ = 1 and c = 0 we have again the upper limit n in the integral, and we find again the correct limit 1/2 for the special case considered above --pma (talk) 15:27, 22 March 2009 (UTC)
RMK. It is possible that the original integral had an upper bound \scriptstyle\lambda_n, defined somewhere; then the definition was lost, and subsequently the unintelligible \scriptstyle\lambda_n was wrongly corrected into \scriptstyle\lambda n; or \scriptstyle\lambda_n became \scriptstyle\lambda n after a typo, and consequently the definition of \scriptstyle\lambda_n was expunged as useless. Still, I can't see where the statement that the limit be 0 comes from. Ignorabimus...  :-(
Oh, but most likely the integral is exactly the one you wrote, and the correct statement is that the limit is 0 for all positive λ strictly less than 1, 1/2 for λ = 1, and 1 for all λ strictly larger than 1. If so, you just need to use the central limit theorem as written above to conclude. The point is that, no matter what \scriptstyle c\in\R is, we have \scriptstyle\lambda n < \frac{n}{\lambda}+\frac{\sqrt{n}}{\lambda}c for all large n, respectively \scriptstyle\lambda n > \frac{n}{\lambda}+\frac{\sqrt{n}}{\lambda}c for all large n, according to whether we are in the case 0 < λ < 1 or the case 1 < λ. This allows one to compare the integrals, finding limit superior = 0 in the former case and limit inferior = 1 in the latter. Does this make sense to you? --pma (talk) 16:19, 22 March 2009 (UTC)
It's just this: if 0 < λ < 1 then for all \scriptstyle c\in\R we have \scriptstyle\lambda n < \frac{n}{\lambda}+\frac{\sqrt{n}}{\lambda}c for all large n, so
\textstyle\limsup_{n\to\infty}\frac{\lambda^n}{(n-1)!}\int_0^{\lambda n}x^{n-1}e^{-\lambda x}dx\leq\lim_{n\to\infty} \frac{\lambda^n}{(n-1)!}\int_0^{\frac{n}{\lambda}+\frac{\sqrt{n}}{\lambda}c}x^{n-1}e^{-\lambda x}dx=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{c}e^{-\frac{x^2}{2}}dx.
Since this holds for all c, and the LHS does not depend on c, we have
\textstyle\lim_{n\to\infty}\frac{\lambda^n}{(n-1)!}\int_0^{\lambda n}x^{n-1}e^{-\lambda x}dx=0;
analogously, if 1 < λ we find the limit to be 1, and if λ = 1 we simply choose c=0 and find the limit 1/2. --pma (talk) 08:26, 25 March 2009 (UTC)
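This trichotomy (0, 1/2, 1) is easy to check numerically. After the substitution u = λx, the integral is the regularized incomplete gamma P(n, λ²n), which can be evaluated with the stdlib via the Poisson-tail identity P(n, t) = 1 − Σ_{k<n} e^{−t} t^k/k! (my own sketch, with each term computed in log space to avoid overflow):

```python
import math

def reg_inc_gamma(n, t):
    """P(n, t) = (1/(n-1)!) * integral_0^t u^(n-1) e^(-u) du,
    via P(n, t) = 1 - sum_{k=0}^{n-1} e^(-t) t^k / k!."""
    tail = sum(math.exp(-t + k * math.log(t) - math.lgamma(k + 1))
               for k in range(n))
    return 1.0 - tail

n = 400
for lam in (0.5, 1.0, 1.5):
    print(lam, reg_inc_gamma(n, lam * lam * n))
# near 0 for lam < 1, near 1/2 for lam = 1, near 1 for lam > 1
```

Already at n = 400 the three regimes are clearly visible, matching the limits derived above.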


March 22

Value of an L-function

I'm interested in the s-derivative at s=0 of the analytic continuation of

\mu(s)={\sum_{z\in\mathbb{Z}[i]}}'\,\frac{1}{|z|^{2s}}.

Up to an overall gamma function and a constant, by taking the proper Mellin transform this is equal to the integral

\mu(s)=\int_0^\infty \theta(x)^2 x^{s-1}\,dx,

where θ is a Jacobian theta function. Does this function have a name? (The Hurwitz zeta function initially seems promising, but doesn't quite do the trick.) I'd like to compute μ'(0), provided it can be done in elementary transcendental functions. 71.182.216.55 (talk) 01:29, 22 March 2009 (UTC)

What is θ here? Algebraist 01:32, 22 March 2009 (UTC)
Sorry, I made some edits in the mean time. 71.182.216.55 (talk) 01:35, 22 March 2009 (UTC)
Anyway, up to constants, \theta(x)=\sum_n e^{-n^2 x}. 71.182.216.55 (talk) 01:40, 22 March 2009 (UTC)

Yeah, so the question is still live, despite recent activity on earlier threads ;-) Is anyone here knowledgeable about arithmetic? 71.182.216.55 (talk) 04:08, 22 March 2009 (UTC)

The problem is interesting because it is the zeta function regularization of the determinant of the Laplacian on the flat (square) torus. Despite the fact that this should be the easiest test case for the determinant, it actually appears to me to be non-trivial. (It turns out that disks in symmetric spaces are easier, provided one believes in radial functions.) 71.182.216.55 (talk) 04:43, 22 March 2009 (UTC)

Birthdays

I am wondering: (A) if there are any actual studies or "real" data to answer my question or, if not, (B) if anyone has any relevant ideas or theories to suggest. My question is this. Statistically speaking, is any one day of the year equally likely to be someone's birthday as any other day of the year? In other words ... does a birthday of January 1 come up with equal probability as a birthday of January 2, January 3, January 4, ... and so on ... until December 31? Perhaps stated another way ... does each day of the year actually have a probability of 1/365? (I assume that, at least theoretically, they do ... right?) Are there any data or studies about this? If not, can anyone think of any ideas / reasons / theories as to why one particular birthday might show up with greater (or lesser) frequency than another? For sake of simplicity and convenience in this question, let's ignore the birthday of February 29. Thanks. (Joseph A. Spadaro (talk) 20:15, 22 March 2009 (UTC))

Google gave me this, which concludes, among other things, that conception is more likely to occur in winter and less in summer. That's for people alive in a specific period in a single country, so it may not reflect general trends accurately. The sample also contains people born in many different years, which smooths out effects like the fact (I don't know if this has a significant effect or not) that doctors are less likely to carry out caesareans and such at weekends. Algebraist 20:32, 22 March 2009 (UTC)
Thanks. Yes, I considered both of those issues. First, "winter" in one half of the world is "summer" in the other half of the world ... so the summer/winter distinction should not affect birthday statistics. Also, for example, August 17 might fall on a weekend in one year, but on a weekday in a different year ... so the weekend/weekday distinction also should not affect statistics. I believe? Thanks. (Joseph A. Spadaro (talk) 20:58, 22 March 2009 (UTC))
The world population is heavily hemispherically-skewed, so the seasonal effects will not obviously balance out. Different cultures and climates might have different seasonal effects, though. Algebraist 21:01, 22 March 2009 (UTC)
And the population of the southern hemisphere is heavily skewed towards the equator. There are very few (I think, almost no) people living more than 45 degrees south, compared to a very large number living more than 45 degrees north. --Tango (talk) 21:20, 22 March 2009 (UTC)
In some cultures you might get people aiming to give birth on certain days, and avoid others. They could therefore take various actions to move the birth date of their child, or perhaps lie about it if they failed to move it. StuRat (talk) 03:18, 23 March 2009 (UTC)


March 23

Birthdays Part 2

This is (sort of) a follow-up to my question above. Let us assume that each day of the year is equally likely as any other to be a randomly selected person's birthday. Thus, all of the days of the year have an equal probability of 1/365. (Again, let's "pretend" that February 29 does not exist as a birthday.) If I randomly selected 365 people and, say, placed them in a room ... shouldn't everyone in that room have a different (unique) birthday? In other words, should all 365 days of the year be represented ... or no? Something tells me "no" ... but, why not? Also, if the answer is indeed "no" ... how many people would it, in fact, take to fill a room such that we would ensure all 365 birthdays are represented? Thanks. (Joseph A. Spadaro (talk) 04:32, 23 March 2009 (UTC))

The problem of how many people are needed before probably two have the same birthday is the Birthday problem. The problem of how many people are needed before all 365 days appear is the Coupon collector's problem. McKay (talk) 04:38, 23 March 2009 (UTC)
And the direct answer to the second question is that if you have no information about actual birthdays, you could put the entire population of the world in your room [to implement this, I recommend an inversion transformation :-)] and you still couldn't be sure that all 365 birthdays are represented. For all you know, it's possible that a February 16 birthday, like North Dakota, does not really exist -- by chance, nobody alive has that birthday. Sure, the probability of that is about (364/365)^6,750,000,000, which is a pretty small number, but you wanted certainty. --Anonymous, 20:25 UTC, March 24, 2009.
And just to have an idea, it's quite unlikely to get all 365 days among the 365 people: around 1 in 10^155 --84.220.118.44 (talk) 09:01, 23 March 2009 (UTC)
It hinges on the fact that once you have n people in a room, assuming they have different birthdays, then you have only a (365-n)/365 chance that the next person has a different birthday. As n increases, and assuming that the probabilities are independent, the probability of all people having a different birthday decreases significantly. At around 23 people you have a 50% probability of two people with the same birthday. -mattbuck (Talk) 14:03, 23 March 2009 (UTC)
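The numbers in this thread are easy to check numerically. A minimal sketch (Python, working in log10 where the values underflow an ordinary float; the 6.75 billion world-population figure is taken from the post above):

```python
import math

DAYS = 365

# P(all n birthdays distinct) = (365/365) * (364/365) * ... * ((365-n+1)/365)
def p_all_distinct(n):
    p = 1.0
    for i in range(n):
        p *= (DAYS - i) / DAYS
    return p

# Smallest n at which a shared birthday becomes more likely than not.
n = 1
while 1 - p_all_distinct(n) <= 0.5:
    n += 1
print(n)  # 23

# P(365 random people cover all 365 days) = 365!/365^365; use log10
# since the probability itself underflows.
log10_p = math.log10(math.factorial(DAYS)) - DAYS * math.log10(DAYS)
print(round(log10_p))  # -157

# P(all 6.75 billion people miss one fixed day) = (364/365)^6,750,000,000.
log10_miss = 6.75e9 * math.log10(364 / 365)
print(round(log10_miss))  # roughly -8 million
```

(The exact coverage probability comes out near 10^-157, the same ballpark as the figure quoted above.)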

[edit] What do mathematicians think about legal laws?

I have studied legal philosophy, but am an amateur math hobbyist. One of my favorite books, talking about elastic collisions (physics), says that nature does the calculation in a split second and adjusts each ball's momentum to ensure that the total kinetic energy of the system is conserved. It's a fun book that toys with the imagination through thought experiments (like the example).

Here goes my question: the laws that govern our society were written by lawyers, not mathematicians. Do mathematicians "see" the ugliness and discontinuity of laws and does it bother them? Petit theft and grand theft differ only by a penny. If someone is adjudged incompetent they avoid criminal so-n-so, if they are not judged incompetent they are 100% open to charges or whatever (these examples are just generalizations, they probably aren't accurate if a lawyer reads the math desk).

I never got an education in math, nor became a mathematician, but I have the heart and brain of a mathematician (i.e. love to see how numbers are involved in every single facet of life!)

It takes extremely in-depth scrutiny and analysis to show why Newton's laws break down at some ultra-deep level; however, a small subset of all the laws are surprisingly poorly thought out (after all, legal "loopholes" are simply a matter of finding a flaw or else an unimagined implication of the law's use, etc...). Is there a relevant wikipedia page on any of this? Is there an author who wrote a book on the topic? Does he or she or his/her book have a wiki page?

I sincerely value the excellent answers I get from the ref desk here, and it helps keep my hobby going when I finish a book and don't know what to start on next.

Thank you,

PS, I tried to keep this completely not about the examples I used; if I could describe whatever I'm looking for, I could google it, if it exists. After about 30 minutes of looking on wikipedia, the closest thing I have found is metamathematics but its a mathematical dissection of math, not of legal laws. Maybe the info I'm seeking doesn't exist? 71.54.173.193 (talk) 08:45, 23 March 2009 (UTC)

There are a lot of mathematicians (even more if you just count people with math PhDs). So their viewpoints on politics, philosophy, law, etc. will vary quite a bit; I don't think you will be able to find any one viewpoint that 75% of mathematicians subscribe to.
There is an anecdote that may be interesting to you. Kurt Gödel immigrated to the United States during World War II and took a citizenship test in 1947. When the presiding judge asked Gödel if the U.S. could become a dictatorship like Germany had, Gödel started to explain how his analysis of the constitution found a loophole that would allow it to happen. The judge, who knew to expect that sort of thing, cut him off and moved the hearing along. — Carl (CBM · talk) 12:14, 23 March 2009 (UTC)
That's a good point. Derrick Niederman wrote a good book that let me appreciate the way that mathematicians think outside of their field. He wrote how politicians and the public think that cutting pork-barrel spending is important, when in fact the national debt is $10 trillion and gave an analogy of the two, explaining the difference in orders of magnitude (which I wish I could specifically recall). Certainly mathematicians would try and make laws that pulled from their analytical reasoning skills. Since I presume mathematicians had no input into making our laws, the laws are void of mathematicians' input, and mathematician-authors could surely point out plenty of fine examples. If nothing else, I read a few great pages on philosophy--some good stuff is over in that little niche of wikipedia--not at all a bad way to start a Monday morning. 71.54.173.193 (talk) 13:01, 23 March 2009 (UTC)
An analysis of the British nationality law was done about thirty years ago or so by some mathematicians as a test case of formalizing law, and they turned up all sorts of corner cases which weren't covered or caused contradictions. The British parliament issues a law a day practically, and doubles the size of the taxman's manual every few years; it's about 10 or 15 thousand pages now I believe. It is more the sort of field where a database specialist or artificial intelligence worker might like to potter, I think. It's just too messy and awful. And the interactions with other countries, oh dear: my wife just got a form sent back to her saying she had to fill in a field for country as England rather than United Kingdom. She's photocopying it so when they next reject it she can get all the accepted fields right and tweak the rejected field. Dmcq (talk) 15:50, 23 March 2009 (UTC)
In a version of the Gödel anecdote that I've seen more than once, the judge (making casual conversation) said "Of course that can't happen here" and Gödel started to say "Well actually—" but Einstein kicked him under the table. But I haven't seen an account of what Gödel found. —Tamfang (talk) 17:48, 24 March 2009 (UTC)
That's the version that I've heard (and the version that I tell people)... any recollection where you heard it? I recall reading it in a popular mathematics book a number of years ago but I can't recollect which one. The wikipedia article agrees with Carl's explanation. Eric. 131.215.158.184 (talk) 00:37, 25 March 2009 (UTC)
Where you write "Do mathematicians "see" the ugliness and discontinuity of laws and does it bother them?", I must say (for myself anyhow), yes, wholeheartedly. A good example, I feel, is the bicameral system of US congress. We have a House and a Senate, one representing states by population and the other having two representatives from each state, purely as a result of a political compromise between the large and the small states when the constitution was originally written. We had, at the time (and quite possibly today as well), no reason to believe that a bicameral legislature is a better way to run a government than the alternatives. Almost anything we look at in the US constitution arose because of a political compromise, not from ratiocination of the optimal solution, or for any mathematical or game-theoretic reasons. I can hardly blame the authors of the US constitution for not using game theory, as it didn't exist at the time, but that doesn't make me feel any better about the quality of the constitution.
I doubt we have a page on what mathematicians dislike about the legal system, nor do I think it's an encyclopedic topic, but if you do turn up any authors who have written books on the topic (or related topics) I'd be very interested to hear. Eric. 131.215.158.184 (talk) 19:30, 23 March 2009 (UTC)
I'm a strong supporter of the bicameral legislature, for the simple reason that it makes it more difficult to pass laws. It's still far too easy at that.
I'd be willing to get rid of the Senate in exchange for a countervailing requirement that made legislation equally or more difficult to pass. Or for one that made repeal of laws easier than passing them. --Trovatore (talk) 19:38, 23 March 2009 (UTC)
But that doesn't explain why the two chambers apportion power to the states in completely different ways. The problem is caused by the necessity, when putting together a new system of government, of keeping everyone's level of power pretty much the same as it already is (otherwise you won't get it approved). You are then stuck with that system for a long time, despite the fact that it only really made sense at the time it was made. --Tango (talk) 19:48, 23 March 2009 (UTC)
Hm? But it does make sense now. Oh, it's probably not the optimal solution, but you'll never get that. It's better than a naive unicameral solution, which would make legislation too easy to pass, as it is in the (effectively unicameral) UK. --Trovatore (talk) 20:13, 23 March 2009 (UTC)
How does it make sense for the fair way of apportioning votes to be completely different in each chamber? Does the UK have more pointless legislation than the US? If you want to make it harder to pass laws, just have a unicameral system which requires a supermajority to do anything. --Tango (talk) 20:32, 23 March 2009 (UTC)
Who has more pointless legislation (but honestly it's not pointless legislation I'm worried about, but rather pointed legislation imposed by a minimum winning coalition) I won't venture to say. I think it's a flaw in the UK system that it is too easy for the government in power to impose its will, that's all I'm saying.
A unicameral system with a supermajority might be acceptable; certainly I'd be willing to consider it. But the mere fact that a unicameral system with simple majority is theoretically neater-looking is not sufficient reason for me to support it. --Trovatore (talk) 20:47, 23 March 2009 (UTC)
Certainly the "everyone's power must stay the same" problem is a big deal... I wish I knew a way around it. See DC Voting Rights Act, fourth bullet, for an example... the only way to give DC a vote (surely Democratic) in the House is to balance it with giving Utah an extra vote (surely Republican)....
As for apportioning power differently in the two houses, that can be (plausibly) justified. Perhaps there is a reason to make the houses heterogeneous with respect to each other, say, so that it is harder to pass legislation as Trovatore advocates. Having both houses apportion power similarly makes legislation easier to pass because it increases the likelihood that the same party is in control of both houses simultaneously. Possibly the benefits of heterogeneity outweigh the cost of having one of the houses have non-optimal power apportionment. Eric. 131.215.158.184 (talk) 22:25, 23 March 2009 (UTC)
There are a lot of alternatives to bicameral, not just unicameral. We can always just increase the number of houses... is there a particular reason why 3 or 4 is worse than 2? Or better? Sure, at some point the complexity overwhelms the system, but it's hard to say at what point the added complexity balances against any marginal benefit of having more houses (assuming that there is a benefit at all). And the moment that you have more than one house, you have a huge number of choices of the relationship between the houses. One could conceive of a system of major and minor legislation, where either house has the authority to pass minor legislation, but both houses must agree to pass major legislation (mmm... I can see some problems with that). Voting rules can be different within the two houses: one requires supermajority and the other does not. Or a supermajority within one house overrules a non-supermajority within the other. Or only one house can initiate legislation. Or only one house can amend existing legislation.
Another arbitrary facet that plays a pivotal role in modern politics: state boundaries. States have their present boundaries in large part due to quirks of US history. But the relative sizes and demographics of the states play a massive role in modern politics and the division of power and control between the states. Although "optimal" is hard to quantify here, it seems quite reasonable to claim that pure chance did not give us the optimal location of state boundaries. Of course, that raises the question of why power should be apportioned into smaller regions at all, or why those reasons should be geographically based, instead of based on ideological differences, wealth differences, educational differences or any number of other relevant features. Eric. 131.215.158.184 (talk) 22:12, 23 March 2009 (UTC)
You mention the difference between theft and grand theft, which isn't as big of a deal, because you're put in front of a judge who is given subjective power. Tax law is a lot less subjective. A good example, something I saw recently, is the new Housing Stimulus package we have in the US. If you are a first time home buyer (haven't owned a home in 3 years) and you make less than $75,000 as a single or $150,000 as a couple, you will get an $8,000 tax credit. Now what about the people making $77,000? They should try and make at least $2,000 less so that they can get $8,000 more. They also are not accounting for regional/cost of living differences. In some parts of the country a salary of 75K will afford you the largest house on the block and in other parts you'll probably have to live in one of the worse neighborhoods to afford anything, and it'd still be small.
While mathematically it could use lots of improvements, it doesn't bother me, because I know they are intentionally trying to make it simple since the tax code is complex enough. Anythingapplied (talk) 20:19, 23 March 2009 (UTC)
You're joking about the tax code, aren't you? Firstly, the past is a guide to the future, so if it has been made complex it will be made more complex. Also the evidence is against you in the UK: it is growing fast and is coming on to about half the size of the current Encyclopaedia Britannica. Dmcq (talk) 18:46, 24 March 2009 (UTC)
The Sorites paradox is possibly the best example of a theft/grand theft situation, and it's a problem which has troubled philosophers for centuries. Essentially here you have a function from a continuous (and possibly multi-valued) input to a discrete set, and there probably is no ideal solution. For the tax codes, again, an optimal solution may be hard to find, but a monotonic solution would be an improvement.--Salix (talk): 20:29, 25 March 2009 (UTC)
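The non-monotonicity of the $8,000 cliff described above, and the monotonic fix Salix alludes to, can be sketched in a few lines. The dollar figures come from the post; the linear phase-out band ($75,000-$95,000) is a made-up illustration, not the actual statute:

```python
# Toy model of the cliff: a flat $8,000 credit that vanishes the moment
# income exceeds $75,000 (numbers from the post above, greatly simplified).
def net_with_cliff(income):
    credit = 8000 if income <= 75000 else 0
    return income + credit

# Non-monotonic: earning $2,000 more leaves you $6,000 worse off.
print(net_with_cliff(75000))  # 83000
print(net_with_cliff(77000))  # 77000

# A monotonic alternative: phase the credit out linearly over a band
# (here $75,000 to $95,000, a hypothetical choice). As long as the credit
# is lost at a rate below $1 per $1 earned, net pay never decreases.
def net_phased(income):
    if income <= 75000:
        credit = 8000
    elif income >= 95000:
        credit = 0
    else:
        credit = 8000 * (95000 - income) / 20000
    return income + credit

# Check monotonicity on a grid of incomes.
grid = range(60000, 100001, 500)
nets = [net_phased(i) for i in grid]
print(all(b >= a for a, b in zip(nets, nets[1:])))  # True
```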
There are logical laws that indicate that these points of ugliness must be there. See e.g. Gödel's incompleteness theorems, which put strong limits on formal systems. And Arrow's impossibility theorem, which shows the impossibility of a "fair" voting system, and IIRC can be adapted to a theorem about resource allocation. Taemyr (talk) 20:43, 23 March 2009 (UTC)
What, if anything, do Gödel's incompleteness theorems have to do with politics or law? Algebraist 21:05, 23 March 2009 (UTC)
Are you alluding to the apportionment paradox? I don't think that follows from Arrow's theorem (it was proven much later). Eric. 131.215.158.184 (talk) 21:54, 23 March 2009 (UTC)
Gödel's results apply to logical systems sufficient for basic arithmetic - has any government ever existed that was capable of doing basic arithmetic? --Tango (talk) 22:10, 23 March 2009 (UTC)
I've actually taken law as one of the prime examples of where the mathematical and scientific styles of thought break down. Since law deals with morality, compromise, pragmatism, and fast decision-making, a theoretical framework is hard to produce. Morality is viewed differently by every person, and some people think it's arbitrary in the first place, but it has to form the foundation of a lot of our legal system. The rest is maintaining the social order. Since nobody agrees on either, we have to take a bizarre middle ground in a lot of cases just to get anything done, and we have to base the decisions made on insufficient evidence and the partially-formed ideas of people with a lot of other work to do. It would be like trying to mathematicize a game of tennis where no player could agree on the rules, but the game had to be decided by lunchtime. It's just not possible. I would see law, in formal terms, as more a giant pile of heuristics. We can't make them consistent without pissing off most of the population (hence compromise), we can't make them optimal without unlimited information and insight, and there's really no point in legislating for things that won't happen. Who cares what the law has to say, if anything, on whether flying pigs should need pilots' licenses? They don't exist! This will not come to court anyway! In a purely logical system, however, that would be just as important a decision as any other - logic provides no way of evaluating the importance of one fact over another. And of course, there are the formal problems mentioned before, such as Arrow's paradox and the Liar's paradox. In the first, it is clear that the unique optimal voting system is formally impossible, and in the second it is clear that there are some questions that can be asked, but not answered, in any reasonably powerful formal system. Black Carrot (talk) 22:29, 23 March 2009 (UTC)
What, if anything, does the liar paradox have to do with politics or law? Algebraist 15:11, 24 March 2009 (UTC)
That's an excellent question. I can't remember what I was thinking when I wrote that. I think my impression was that there is a way to embed the liar paradox or something similar into a legal dispute in such a way that the court would have to make a ruling, but could have no logical basis for doing so, a bit like the Hanging Man paradox. I'm having trouble coming up with an actual example, though. Black Carrot (talk) 19:06, 24 March 2009 (UTC)
Here's a rather unpleasantly contrived example. What if someone, under oath, testified that they were telling a lie? Does that count as perjury? Of course, in the real world they would probably "cut the knot" and find him in contempt of court if they paid attention at all, but it's the best I've got at the moment. Black Carrot (talk) 19:10, 24 March 2009 (UTC)
They would probably be found in contempt for talking about something not relevant to the trial - or just told to get on with answering the questions. --Tango (talk) 15:01, 25 March 2009 (UTC)

[edit] vertex connectivity of a graph

Hello. I want to prove that the following construction of a graph G with n vertices and e edges gives us a graph with maximum possible vertex connectivity which is \left\lfloor\frac{2e}{n}\right\rfloor. First construct a regular graph on n vertices where each vertex has degree \left\lfloor\frac{2e}{n}\right\rfloor. Now add the remaining e-\frac{n}{2}\left\lfloor\frac{2e}{n}\right\rfloor edges arbitrarily. The resultant graph has vertex connectivity \left\lfloor\frac{2e}{n}\right\rfloor.

I can understand that such a graph would have a vertex v of degree \left\lfloor\frac{2e}{n}\right\rfloor and removing the adjacent vertices of v would thus disconnect the graph but I want to prove that removing fewer vertices can never disconnect G. How can I do that please?--Shahab (talk) 16:24, 23 March 2009 (UTC)

As stated, this procedure obviously fails. For example, with n=e=6, one such G is a pair of triangles, which is not even connected. I'm not sure yet if requiring connectivity of the \left\lfloor\frac{2e}{n}\right\rfloor-regular graph gets you anywhere. Algebraist 16:31, 23 March 2009 (UTC)
Update: it doesn't. There are 3-regular connected graphs which are not even 2-connected. Algebraist 16:43, 23 March 2009 (UTC)
There's another problem, too. n and \left\lfloor\frac{2e}{n}\right\rfloor may both be odd, in which case there is no regular graph with that degree. A suggestion for a different construction: form a clique of the desired connectivity, connect every other vertex to each clique vertex, and use up any leftover edges by connecting pairs of non-clique vertices. —David Eppstein (talk) 17:03, 23 March 2009 (UTC)
OK. I see this construction is faulty. I was reading from this book (pg. 78). David, why will removing the vertices in the clique disconnect the graph? Non-clique vertices may be forming a path.--Shahab (talk) 18:20, 23 March 2009 (UTC)
I don't understand the implicit claim in the second sentence of the original post. The complete graph on n=k vertices and e=k(k-1)/2 edges is k-vertex connected and (k-1)-edge connected, right? However, k is quite a bit larger than floor( 2e/n ) =floor( k/2 ) = k-1, right? In particular, K4 has n=4, e=6, k=4 > floor(2e/n) = 3. Is it asking for a graph that somehow manages to NOT be highly connected? JackSchmidt (talk) 18:02, 24 March 2009 (UTC)
2e/n=k-1 in that case. Where I come from (Bollobas's text book agrees with me), the complete graph on k vertices by definition has vertex-connectivity k-1. This slightly perverse definition ensures that the connectivity is always at most the minimal degree, which is of course at most floor(2e/n). Algebraist 18:11, 24 March 2009 (UTC)
Combinatorics is significantly harder for people who cannot handle counting more than 2 things or doing simple arithmetic. Ok, with your two corrections to my understanding, I think it makes sense. Then David Eppstein's argument shows how to keep the connectivity high while filling in the rest of the nodes and edges. Ok, all good. The answer to the OP's second question is then simply "because it can't have more than the maximum connectivity," right? Someone else should answer it, as I find most of combinatorics baffling. JackSchmidt (talk) 19:07, 24 March 2009 (UTC)
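Both of Algebraist's counterexamples can be checked by machine. A sketch using the networkx library (assumed available); the particular 3-regular graph with a cut vertex is one construction of my own, not from the thread:

```python
import networkx as nx

# First counterexample: n = e = 6, a pair of disjoint triangles.
# The graph is 2-regular, matching floor(2e/n) = 2, yet not even connected.
G1 = nx.Graph([(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)])
print(nx.node_connectivity(G1))  # 0

# Second counterexample: a 3-regular connected graph that is not
# 2-connected. Gadget: K4 minus an edge, plus an extra vertex v joined
# to the two degree-2 vertices; linking two gadgets by an edge v1-v2
# makes everything degree 3 and leaves v1-v2 as a bridge.
def gadget(offset):
    a, b, c, d, v = (offset + i for i in range(5))
    return [(a, c), (a, d), (b, c), (b, d), (c, d), (v, a), (v, b)]

G2 = nx.Graph(gadget(0) + gadget(5) + [(4, 9)])
print(sorted({d for _, d in G2.degree()}))  # [3], i.e. 3-regular
print(nx.node_connectivity(G2))             # 1, so not 2-connected
```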

[edit] March 24

[edit] Pronunciation

How does one pronounce the name of Don Zagier? I couldn't figure out from his article whether his mother tongue is German or English. The name would be pronounced [ˈtsaːɡiːɐ̯] in German I guess, but I have no idea how would it go in English if the name is English after all. — Emil J. 11:27, 24 March 2009 (UTC)

I don't know the answer to your question but I suggest that you post it on the Language desk where it is more appropriate. Cheers--Shahab (talk) 12:41, 24 March 2009 (UTC)
I'm well aware of the Language RD, but since answering the question requires factual information about a mathematician, I assumed I'd have better luck at the Math desk. — Emil J. 14:40, 24 March 2009 (UTC)
His mother tongue is German, and the pronunciation you suggest is indeed the right one. That said, he mostly works in France, so he's quite used to hearing his name pronounced in all manner of funny ways. Bikasuishin (talk) 12:54, 24 March 2009 (UTC)
Thanks! — Emil J. 14:40, 24 March 2009 (UTC)

[edit] Sylow 2-subgroups of classical groups

I haven't really thought this through much, but could it be that, say, x(a)=\rho^i(p),~x(a)x(b)=\rho^{i+j}(p)? I'm still a bit confused by your notion of parametrising group elements by this field... (I tried looking up stuff about the Steinberg presentation this morning in our library, but found nothing useful, sadly) 79.73.251.120 (talk) 14:33, 25 March 2009 (UTC)

That is how I want it to work, but I'm having a heck of a time making it make sense in odd or zero characteristic (and it should be universal), and I'm worried about some little details I assumed were easy. The problem is basically defining "z" and trying to prove y(b)*x(a) = x(a)*y(b)*z(ab) in the abstract group P (in SU3 or SL3 it is easy).
The x(a)*x(b) thing is actually trickier. x(a) = ρ^i(p) where a = ζ^i in K, so we actually need Zech logarithms to write a + b = ζ^k for some k = k(i,j). Even worse is when a = −b, then we are trying to solve ζ^k = 0, oops. I can define x,y,z naturally so that [ y(a), x(a) ] = z(a^2) in the abstract group P, but I've had no luck showing more generally that [ y(b), x(a) ] = z(ab). Also my "natural" definition of z(a) only specifies z(a) for squares a (because it is roughly equivalent to defining z(a^2) = [ y(a), x(a) ]), and in characteristic not two (or imperfect char 2), squares might not be the whole field.
The Steinberg presentation is covered in most books on finite simple groups of Lie type. For the maximal unipotent subgroup it is also just called "Chevalley's commutator formula". Steinberg's Lectures on Chevalley Groups and Carter's Simple Groups of Lie Type are the books I used as I learned it. I found it very difficult to learn, and required very patient instruction. JackSchmidt (talk) 16:53, 25 March 2009 (UTC)

[edit] ALGORITHM carl leffler

I seek information about Carl Leffler, a mathematics professor circa 1930-1931, who created the Bingo cards now in use. The mathematical tool used was algorithm. He did this "manually", this being before computers

He is said to have been affiliated with Columbia University in New York City

All pertinent items desired: Professional papers, anecdotes, mathematics institutional affiliations. In short, anything that will permit a fuller image of one who, singly, successfully completed work requiring enormous mental ability.

I am the Bingo caller at a Salvation Army Independent Living facility in New York City. The information requested is for a Bingo research project

Thanks Center39 (talk) 19:15, 24 March 2009 (UTC)

You might try contacting the Columbia archives department [2]. You can also try calling random Lefflers in the phonebook in case you discover a relative. By the way, everyone uses algorithms, so that does not narrow it down. McKay (talk) 23:40, 24 March 2009 (UTC)

[edit] March 25

[edit] Using two results to generate a "combined" result

Hi - I have been asked a question I can't quite get my head round....
Each day at work I report two results, A and B. A is calculated using the formula A = L/R, and B is calculated using B = D/(R+T). I have now been asked to report these as a "combined result". This seems a little nonsensical to me (believe me, it is), but in order to keep the powers that be happy I need to report this new result. The question is: how do I combine the results in a way that makes some mathematical sense? Thanks for your help! sparkl!sm hey! 09:22, 25 March 2009 (UTC)

You'll have to tell us a good deal more about what's going on and what everything means to get a useful answer. Algebraist 09:31, 25 March 2009 (UTC)
OK, I tried to keep it simple, but I will attempt to explain.
The results I report relate loading cages of stock onto lorries in a depot. Result A is the percentage of cages that did not get loaded onto a lorry, where L represents the number of cages that did not get loaded (lost), and R represents the total number of cages delivered to store (retail). Result B is the percentage of cages that were deleted from the depot computer system before they reached the loading bays, where D = no. of deleted cages, R again represents the total number of cages delivered to store, and T represents the number of cages delivered elsewhere (trunk). Is there a sensible way to combine the numbers? Thanks again! sparkl!sm hey! 10:08, 25 March 2009 (UTC)
A number of ideas:
1. Figure out a cost for each and report a total cost.
2. Report the maximum of the two
3. Toss a coin and report one depending on heads or tails
4. Report a random number and see if they notice
5. Report the exact same number every day and see if they notice.
6. Report 2^L·3^R·5^D·7^T; this loses absolutely no information.
7. Alternatively just interlace the decimal digits of A and B - this is easy to reverse back to A and B. Dmcq (talk) 12:40, 25 March 2009 (UTC)
Thanks, I think I'll go with number 4 ;) sparkl!sm hey! 13:34, 25 March 2009 (UTC)

Easy! Consider A to be the length of the X-axis and B to be the length of the Y-axis.

Report back C=Sqrt(A^2 + B^2). 122.107.207.98 (talk) 11:46, 26 March 2009 (UTC)
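For what it's worth, Dmcq's option 7 (interlacing decimal digits) really is reversible. A toy encoder, assuming two-decimal precision and a fixed five-digit width (both arbitrary choices):

```python
# Encode two percentages as one number by writing each as a zero-padded
# fixed-point string (two decimal places) and interleaving the digits.
def combine(a, b, width=5):
    sa = f"{round(a * 100):0{width}d}"   # e.g. 3.17% -> "00317"
    sb = f"{round(b * 100):0{width}d}"   # e.g. 12.5% -> "01250"
    return "".join(x + y for x, y in zip(sa, sb))

# Decode by taking alternate digits back out.
def split(s):
    return int(s[0::2]) / 100, int(s[1::2]) / 100

code = combine(3.17, 12.5)
print(code)         # "0001321570"
print(split(code))  # (3.17, 12.5) -- round trip is exact
```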

[edit] Necklace with pendante catenary shape

Hi,

I can work through the derivation for the shape of a catenary with no problem. However, what I want to do is find the shape for a catenary with an additional downwards force in the centre - e.g. like a necklace with a pendant. Would the chain on one side of the pendant still follow a catenary shape? I thought it would, with boundary conditions given by the length of the chain and the additional force, but I can't seem to adjust the parameters in the standard catenary expression to give an asymmetric shape.

I am ultimately working towards 'inverting' this to find the optimum shape for an arch bridge which supports its self-weight plus a point load in the centre. Any help greatly appreciated! LHMike (talk) 10:48, 25 March 2009 (UTC)

Yes, it'll be two parts of a catenary stuck together. Each little bit of the chain only knows what's beside it, so the equations at any point except the centre are exactly the same. For any point in the catenary there is a force along the catenary supporting the centre bit. The upward component of that force will be supporting the weight of the catenary from that point to the bottom, so it is proportional to half the length of the catenary at that height. So you need to find the length of catenary that equals the weight of your pendant, chop that part out of the catenary and join the two halves together. By the way, Gaudí is reputed to have designed the Sagrada Família by hanging weights from nets that way. Dmcq (talk) 12:21, 25 March 2009 (UTC)
Ah! Yes, chop out the length equal to the weight of the pendant. Fantastic, thanks, now to Maple! LHMike (talk) 14:20, 25 March 2009 (UTC)
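Dmcq's "chop out" rule can be checked numerically before reaching for Maple. In the sketch below the symbols are mine, not from the thread: lam is weight per unit length, H the horizontal tension, W the pendant weight, and c = H/lam the catenary parameter of y = c*cosh(x/c):

```python
import math

# Pick arbitrary physical constants (my choices, any positive values work).
lam, H, W = 2.0, 10.0, 7.0
c = H / lam          # catenary parameter
ell = W / lam        # length of chain whose weight equals the pendant's

# Arc length from the vertex to x is s(x) = c*sinh(x/c), so each half-chain
# starts at the x where the removed half-arc ends: c*sinh(x/c) = ell/2.
x_join = c * math.asinh(ell / (2 * c))
slope = math.sinh(x_join / c)   # dy/dx of the catenary at the join

# Vertical force balance at the kink: the two halves pull up with vertical
# component H*slope each, and together they must carry the pendant weight W.
print(abs(2 * H * slope - W) < 1e-9)  # True
```

The check works out exactly because slope = sinh(asinh(ell/2c)) = ell/(2c) = W/(2H), confirming that joining the two halves after removing arc length W/lam leaves precisely the kink needed to support the pendant.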

[edit] Show xsin(1/x) is integrable rigorously?

Hi there guys - pretty much wondering what the title says - how would one show rigorously that x\sin\frac{1}{x} with f(0)=0 is integrable? I've been working with Riemann integrals but I'm not following the theory brilliantly, so if anyone could give me a little help on this that'd be great. I have that a function is Riemann integrable if its upper and lower integrals are equal, or alternatively if for any ε there exists some dissection D with U(f,D) − L(f,D) < ε, but I don't really know how to start proving that in this example case...

Thanks for any help,

Otherlobby17 (talk)Otherlobby17 —Preceding undated comment added 10:49, 25 March 2009 (UTC).

Here is a hint: for a given t, think of the region of integration in two parts, the part where |x| >= t and the part where |x|<t. How does the integral behave in each of the two parts? How large is the contribution of the "messy" part to the total, if t is very small? 75.62.6.87 (talk) 11:05, 25 March 2009 (UTC)
It's a continuous function, what's the matter? On an interval [a,b] you always have \scriptstyle U(f,D)-L(f,D)\leq(b-a)\omega(|D|), where ω is a modulus of continuity of f and |D| is the maximum distance between consecutive points of D (the so-called "modulus" or "norm" of the "partition" D). Here f is 1/2-Hölder continuous, that is, it has a modulus of continuity of the form ω(t) := Ct^{1/2}. --PMajer (talk) 13:33, 25 March 2009 (UTC)
This approach easily shows that all continuous functions are Riemann/Darboux integrable, once one has the Heine–Cantor theorem that continuous functions on closed bounded intervals are uniformly continuous. Algebraist 14:03, 25 March 2009 (UTC)
So let me add that if you are still "not following the theory brilliantly", the example of your f(x) is a nice exercise to understand better the general facts mentioned by Algebraist. Notice that \scriptstyle U(f,D)-L(f,D)\leq (b-a)\omega(|D|) is quite a simple and elementary inequality, and that you can compute explicitly the constant C in the modulus of continuity of your f (you need the Heine-Cantor theorem only in the generality of the above mentioned statement about all continuous functions, of course).--pma 16:02, 25 March 2009 (UTC)
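The ε-criterion can also be watched in action numerically. The sketch below estimates U(f,D) − L(f,D) on [0,1] for uniform dissections; the sup and inf on each piece are only approximated by sampling, so this illustrates the theory rather than proving anything:

```python
import math

def f(x):
    return x * math.sin(1 / x) if x != 0 else 0.0

# Approximate Darboux gap U(f,D) - L(f,D) for the uniform dissection of
# [0,1] into n pieces, estimating sup/inf on each piece by sampling.
def gap(n, samples=40):
    total = 0.0
    for k in range(n):
        a, b = k / n, (k + 1) / n
        vals = [f(a + (b - a) * j / samples) for j in range(samples + 1)]
        total += (max(vals) - min(vals)) * (b - a)
    return total

# The gap shrinks as the mesh |D| = 1/n shrinks, as the criterion requires.
print(gap(100) > gap(1000))  # True
print(gap(1000) < 0.02)      # True
```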

A function is Riemann-integrable on a bounded interval if and only if it's continuous almost everywhere in the interval. The function you give is continuous. Even if the concept of "almost everywhere" has not been mentioned (as it might well not be, when the Riemann integral is treated) it would probably be mentioned that if a function is everywhere continuous then it's Riemann integrable. Michael Hardy (talk) 02:16, 26 March 2009 (UTC)

You forgot the boundedness condition: f from [a,b] to R is Riemann-integrable iff f is bounded and continuous almost everywhere. Algebraist 02:29, 26 March 2009 (UTC)
In this case, it doesn't matter, though - a continuous function on a closed interval is always bounded, so Michael's last statement (the important one) is accurate (as long as you take "interval" to mean "closed interval", anyway). --Tango (talk) 05:04, 26 March 2009 (UTC)

[edit] unsolved math mysteries?

I'm wondering if anyone could name any well-known unsolved math mysteries. I don't mean conjectures like Goldbach's (i.e. propositions which most mathematicians think are true, but are missing proofs, even though some of these eventually turn out to be false). By a mystery, I mean something that is knowable and interesting but there is significant uncertainty about what the answer will actually be when it is found. Thanks for any examples. 75.62.6.87 (talk) 11:00, 25 March 2009 (UTC)

Unsolved problems in mathematics is the place to be. Livewireo (talk) 14:45, 25 March 2009 (UTC)
Unfortunately, most of those articles don't indicate whether they're things that everyone believes (like Goldbach's) or things that there's uncertainty about. I believe an example of the latter is the invariant subspace problem for Hilbert spaces. Algebraist 14:50, 25 March 2009 (UTC)
Whether the Euler–Mascheroni constant is irrational is a good one, probably? SetaLyas (talk) 15:22, 25 March 2009 (UTC)
Well, there's the problem. As far as I know, everyone thinks it must be irrational (and indeed transcendental), but Wikipedia articles don't tend to give information on this kind of thing, and it might be hard to find reliable sources. Algebraist 15:26, 25 March 2009 (UTC)
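As a numerical aside, the constant under discussion is easy to approximate straight from its definition γ = lim (1 + 1/2 + … + 1/n − ln n); a quick sketch:

```python
import math

# Euler-Mascheroni constant: gamma = lim_{n->inf} (H_n - ln n).
# The partial sums converge like gamma + 1/(2n), so n = 10**6 gives ~6 digits.
n = 10**6
harmonic = sum(1.0 / k for k in range(1, n + 1))
gamma_approx = harmonic - math.log(n)
print(gamma_approx)  # about 0.5772156..., yet nobody has proved it irrational
```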
I would add the continuum hypothesis, though this does require you to liberalize your notion of what it means to "find" an answer. I think it's fair to say that, among set theorists who think it has a well-defined answer, the majority think it's false. But there is a significant countercurrent represented by Matthew Foreman. --Trovatore (talk) 19:46, 25 March 2009 (UTC)
Would you mind taking a stab at what proportion of set theorists do think CH has a well-defined truth value? Algebraist 20:33, 25 March 2009 (UTC)
Hmm, that's difficult. Mostly they don't like to commit themselves on the point in public. It's fairly clear that Woodin thinks so, and Moschovakis; beyond that I can't really say. --Trovatore (talk) 20:38, 25 March 2009 (UTC)
Thanks. Algebraist 23:27, 25 March 2009 (UTC)
That's an unsolved problem in the philosophy of mathematics, rather than mathematics itself, really, isn't it? --Tango (talk) 23:41, 25 March 2009 (UTC)
Actually, no, I'd call it an unsolved problem in mathematics, but one whose solution requires doing some philosophy. Some mathematicians like to pretend that philosophy of mathematics is an add-on, something you don't really have to worry about if you don't want to. In my view that's a superficial approach. --Trovatore (talk) 00:06, 26 March 2009 (UTC)
Yes... I've always noticed this mysterious reserve of set theorists. Maybe they know something? Sometimes I'd almost say they like to show they don't like to commit themselves. It is strange though... --pma (talk) 23:42, 25 March 2009 (UTC)

[edit] Independence number of 2^n as a graph.

I became curious about this: what is the maximum cardinality of a subset S of the set 2^n of all binary strings of length n, such that for all x and y in S one has (x_2, x_3, …, x_n) ≠ (y_1, y_2, …, y_{n−1})? We may formulate it as a graph problem: if G_n is the graph with vertex set 2^n and (x,y) is an edge iff (x_2, x_3, …, x_n) = (y_1, y_2, …, y_{n−1}), what is the independence number α(G_n)? I'd be happy even with asymptotics. I made some hand computations for small cases, but of course the complexity grows very fast. I think something should be known, though. But what is the name of these graphs? I'd say "shift graphs", but it seems that name already has another use. This File:Shiftgraphs.pdf has a group picture of the graphs for n=0,…,5. --pma 15:37, 25 March 2009 (UTC)

Have you tried searching for your small-case results on the OEIS? Algebraist 15:42, 25 March 2009 (UTC)
Well, there are too few, just the first 4 or 5, and now I am not even sure about them; but you are definitely right that this is one thing to do, after computing some more --pma 19:17, 25 March 2009 (UTC)
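For what it's worth, the small cases can be recomputed by brute force along these lines (a Python sketch; the encoding of strings as integers and the function names are my own, and it is only feasible for n up to 4 or so):

```python
from itertools import combinations

def is_edge(x, y, n):
    """(x,y) is an edge of G_n iff the last n-1 bits of one string equal
    the first n-1 bits of the other (strings encoded as ints, MSB = x_1)."""
    mask = (1 << (n - 1)) - 1
    return (x & mask) == (y >> 1) or (y & mask) == (x >> 1)

def independence_number(n):
    """alpha(G_n) by exhaustive search; the constant strings 00...0 and 11...1
    carry self-loops, so they can never belong to an independent set."""
    verts = [v for v in range(1 << n) if not is_edge(v, v, n)]
    for k in range(len(verts), 0, -1):
        for S in combinations(verts, k):
            if all(not is_edge(x, y, n) for x, y in combinations(S, 2)):
                return k
    return 0

for n in range(1, 5):
    print(n, independence_number(n))
```

Once a few more terms are in hand, the resulting sequence is exactly the sort of thing to feed to the OEIS, as Algebraist suggests.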

[edit] Set theory paradox

Is the cardinality of the set containing exactly the finite natural numbers finite or infinite? Lucas Brown (talk) 18:35, 25 March 2009 (UTC)

Infinite. Where's the paradox? --Trovatore (talk) 18:45, 25 March 2009 (UTC)

...and similarly the set of all countable ordinals is uncountable. I don't think things like that are generally considered paradoxical. Michael Hardy (talk) 02:10, 26 March 2009 (UTC)

[edit] Random number generation

You see the function on calculators to produce a random number, how is this generated? Surely there must be a logarithm to work out which number to select, which in itself means the number would not be random. —Cyclonenim (talk · contribs · email) 20:53, 25 March 2009 (UTC)

I think you mean algorithm, not logarithm. See the article Pseudo-random number generator. Aenar (talk) 21:02, 25 March 2009 (UTC)
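To illustrate the point of that article: here is a minimal sketch of one classic pseudo-random scheme, the linear congruential generator. The constants are the well-known Numerical Recipes choice, not those of any particular calculator:

```python
class LCG:
    """A linear congruential generator: state := (a*state + c) mod m.
    Purely an illustrative sketch of how a deterministic machine produces
    'random-looking' numbers."""

    def __init__(self, seed):
        self.state = seed % 2**32

    def random(self):
        """Pseudo-random float in [0, 1)."""
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state / 2**32

rng = LCG(seed=2009)
print([round(rng.random(), 4) for _ in range(5)])
# Re-seeding with the same value replays exactly the same sequence --
# which is why these numbers are only *pseudo*-random.
```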

[edit] March 26

[edit] winding

Do we have an article on the problem of the size of a roll of wound-up material like cable or toilet paper? You know, the more you wind, the greater the circumference, so the more it takes to increase the diameter.. .froth. (talk) 05:18, 26 March 2009 (UTC)

To a very close approximation (ignoring that the paper isn't an exact circle, and that the thickness may depend on the curvature), the length will be proportional to the area. In a sense, putting thin sheets together was the start of integral calculus: here each thin sheet wound in a circle has length 2πr, and integrating gives the area. Dmcq (talk) 08:49, 26 March 2009 (UTC)
See Clackson scroll formula (WHAAOE). Gandalf61 (talk) 10:47, 26 March 2009 (UTC)
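The approximation Dmcq describes (cross-sectional area = length × thickness) gives the usual roll-length formula; a quick sketch, with made-up dimensions:

```python
import math

def roll_length(outer_diameter, inner_diameter, thickness):
    """Length of material in a roll, from cross-section area = length * thickness
    (the approximation described above; same idea as the Clackson scroll formula)."""
    R, r = outer_diameter / 2, inner_diameter / 2
    return math.pi * (R**2 - r**2) / thickness

# Hypothetical toilet-roll dimensions, all in cm: 12 cm roll, 4 cm core, 0.03 cm paper.
print(round(roll_length(12, 4, 0.03)), "cm")
```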