User:Bob K

[edit] Finding subpages

Is there any way to list all the subpages of a specific page?

To find all subpages of Manifold, for instance, go to Special:Allpages and type "manifold/" in the box labelled "Display all pages starting with:" (I learned this trick from R. Koot) -- Jitse Niesen

[edit] Moved images.

[1]

I also went there and changed PD to PD-self, but I didn't realize that I was not logged in, because I wasn't a member of commons. It seemed to work anyway. Is that true? And did I need to do that? If so, shouldn't someone have notified me?... like maybe the person who moved the image? Is it going to stay in one place now? Should I have uploaded it differently than I did? -- Bob K
Since the image contains English text not suitable to http://simple.wikipedia.org/ I think User:Tiaguito's move to commons was wrong, and your original upload was fine. It's not a problem since it still shows up fine in the articles. -- Jeandré, 2006-01-11t11:00z


[edit] example

X(k) = \left. X(e^{i \omega}) \, \right|_{\omega = 2 \pi \frac{k}{N}} = \left. \sum_{n=0}^{N-1} x[n] \,e^{-i \omega n} \, \right|_{\omega = 2 \pi \frac{k}{N}} = \sum_{n=0}^{N-1} x[n] \,e^{-i 2 \pi \frac{k n}{N}}
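A minimal Java sketch of this formula, evaluating the sum directly rather than with an FFT (the impulse input in main is just a hypothetical example):

// Direct evaluation of X(k) = sum_{n=0}^{N-1} x[n] * e^{-i 2 pi k n / N}
// for a real-valued sequence x. Returns {Re X(k), Im X(k)} pairs.
public class DirectDft {
    static double[][] dft(double[] x) {
        int N = x.length;
        double[][] X = new double[N][2];
        for (int k = 0; k < N; k++) {
            double re = 0, im = 0;
            for (int n = 0; n < N; n++) {
                double w = -2 * Math.PI * k * n / N;  // omega*n at omega = 2*pi*k/N
                re += x[n] * Math.cos(w);
                im += x[n] * Math.sin(w);
            }
            X[k][0] = re;
            X[k][1] = im;
        }
        return X;
    }
    public static void main(String[] args) {
        double[][] X = dft(new double[]{1, 0, 0, 0});   // impulse: X(k) = 1 for all k
        for (double[] bin : X) System.out.printf("%.3f %+.3fi%n", bin[0], bin[1]);
    }
}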

[edit] two articles for discrete Fourier transform?

Hi Bob. Thanks for your suggestion to create two articles. Stevenj, however, immediately removed every use of the notation  1^{1/N} for a primitive N-th root of unity, which is needed in order to avoid the transcendentals e and π and in order to simplify the notation.

 1^{1/N}=e^{2\pi i / N}\,

StevenJ has no respect for WP:DR. So I cannot have a second article in peace.

The idea is to write an N-periodic sequence a_n as a linear combination of the N-periodic sequences of powers of the N-th roots of unity.

 a_n = b_1\cdot 1^{1\cdot n/N}+b_2\cdot 1^{2\cdot n/N}+\cdots+b_N\cdot 1^{N\cdot n/N}

Bo Jacoby 07:07, 29 July 2007 (UTC).
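For a concrete reading of the notation: 1^{1/N} above denotes e^{2\pi i/N}, the principal primitive N-th root of unity. A minimal Java check (purely illustrative, not part of the discussion; N = 8 is an arbitrary example) confirms numerically that its N-th power returns to 1:

// Numeric check that z = e^{2 pi i / N} (the notation 1^{1/N}) satisfies z^N = 1.
public class RootOfUnity {
    public static void main(String[] args) {
        int N = 8;
        double c = Math.cos(2 * Math.PI / N);   // Re(z)
        double s = Math.sin(2 * Math.PI / N);   // Im(z)
        double re = 1, im = 0;                  // accumulates z^n, starting at z^0 = 1
        for (int n = 0; n < N; n++) {           // multiply by z, N times
            double t = re * c - im * s;
            im = re * s + im * c;
            re = t;
        }
        System.out.printf("z^N = %.6f %+.6fi%n", re, im);   // ~ 1.000000 +0.000000i
    }
}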

Hi Bob. I edited root of unity, but StevenJ opposes it. Please see the history page and the talk page. StevenJ has a know-all attitude, and insists that the articles be technical and confusing. I find it impossible to work under these conditions. I request your advice. Bo Jacoby 18:38, 7 August 2007 (UTC).

Hi Bo. My best advice is to forget about it. But if you are determined to persevere, the advice you really need is how to invoke the "official" Wikipedia dispute resolution procedure. Or you can probably just go read all about it somewhere. But I advise you to keep in mind that the pay here really stinks. It's not worth the trouble, IMO. If someone wants to shout louder than me, I just let them have their way. My ego can stand it. So anyhow, I am not the one to advise you on dispute resolution. If you do go down that path, you of course have to be prepared to accept the outcome.
Regarding my suggestion to create a new article, it appears you haven't actually tried that. If you do, try not to appear redundant; i.e. don't call it something like Discrete Fourier Transform 2. Look for a new angle to justify a new article. And I guess it has to be non-original. That is where you historically run into trouble, as I recall. But, as always, there is no guarantee of success. That's just part of the deal at Wikipedia. If you can't work under those conditions, you won't be the first to feel that way.
--Bob K 23:51, 7 August 2007 (UTC)

Hi Bob. Thanks for your answer. Last year I did forget about it, until also somebody else called StevenJ's writing technical and confusing. My new short subsection of root of unity is basically the new article that you suggested, but StevenJ deleted it, commenting that the information is already in the old (confusing) article. This violates the advice of WP:DR. Do you agree on that? The consequence is that the confusing article cannot be improved. This is not my problem alone, but also yours. However, as I read your answer, you do not intend to support me or to go into a dispute with StevenJ (?) and so I am alone, and so I will forget about it following your best advice. Have a nice day. Bo Jacoby 05:31, 8 August 2007 (UTC).

  • Sorry, I didn't realize that WP:DR is the dispute resolution guidance. So that was unnecessary advice. Whether or not Steven violated the guidance... well I would have to read everything you guys wrote, and I would have to read the guidance, and frankly it's just not my thing. It's not what I visit here for. Enough of what I contribute seems to survive to keep me interested. When that is no longer the case, I expect I will simply lose interest, rather than engage in long arguments about rules.
  • I should probably clarify that the non-original "rule" is not something that I care about. If I were the rule czar, I would say that original stuff is fine provided it is in a standalone article and clearly marked as original, so those who don't want to read it don't waste their time. Others will probably read it just because it is original. But I also think I understand the reasoning behind the rule. They feel they have to draw a line somewhere, and maybe they are wiser than I. They certainly care more than I do.
  • I am not encouraging you to attempt dispute resolution, but I will say that I don't have my mind already made up. I simply have not looked at all the edits in question. root of unity is not a big interest of mine. Similarly, I got invited into a debate over zero-order hold a while ago, and I kept wondering why I was doing that. I never even read the article before that.
I think you have made the wiser choice.
--Bob K 13:02, 8 August 2007 (UTC)

[edit] original research

Thanks a lot. I do not know in advance whether an article of mine is original research. When I finally understand something after reading and thinking and experimenting, I may have been the first person in the world to have grasped it, but usually I am not. For example I found methods for computing complex roots of polynomials, and for data modelling and database lookup. I had programmed these methods and tested and used them for years, and I found them competitively efficient and simple and fast. None of them seemed to be described in wikipedia or elsewhere, and so I wrote articles. Fellow editors observed that the method for computing roots is known in the literature, and so the article changed title to Durand-Kerner method. Fellow editors also observed that the method for data modelling and database lookup did not seem to be known in the literature, and so the article ordinal fraction was removed from wikipedia. To my fellow editors this is a win-win situation: either they have the triumph of removing my name, or they have the triumph of removing my article. To me it is also a win-win situation: either I have my contribution approved, or I have my claim of originality approved. So this is a win-win-win-win situation. The only drawback is that some editors get provoked and upset and hostile. My trivial observation that the pair of transforms,

x_j = n^{-1/2}\cdot\sum_k X_k^*\cdot z^{j\cdot k}
X_j = n^{-1/2}\cdot\sum_k x_k^*\cdot z^{j\cdot k}

(where z is a primitive nth root of unity, j and k each take n consecutive integer values, and * indicates complex conjugation), is perfectly symmetric, now seems to be original research, much to my surprise. I programmed a fast Fourier transform based on this, and I never again confused the Fourier transform with the inverse Fourier transform, because they are the same. How nice! Anybody interested? Apparently not. My observation that the statistical formulas for mean and standard deviation of the induction likelihood distribution, IF(N,n,i), follow from the corresponding formulas, if(N,n,I), for the hypergeometric distribution by the transformation (N,n,I,i) → (−2−n,−2−N,−1−i,−1−I), (where N and n are the numbers of items, and I and i the numbers of defectives, in the population and the sample respectively), was removed from wikipedia as original research. It is an immensely useful formula, and all the statisticians of the world make computational mistakes when they use approximate methods for statistical inference, because they do not know it. Generally I like to cooperate with fellow wikipedia editors. StevenJ is an unpleasant exception. Bo Jacoby 16:59, 8 August 2007 (UTC).
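As a purely illustrative aside (a minimal sketch, not code from the discussion), the claimed symmetry can be checked numerically with z = e^{2\pi i/n}: applying the transform twice reproduces the original sequence, so the transform and its inverse really are the same.

// Bo's symmetric transform T{x}_j = n^{-1/2} * sum_k conj(x_k) * z^{jk},
// with z = e^{2 pi i / n}. Applying it twice should return the input.
public class SymmetricTransform {
    static double[][] transform(double[][] x) {          // x[k] = {re, im}
        int n = x.length;
        double[][] y = new double[n][2];
        for (int j = 0; j < n; j++) {
            double re = 0, im = 0;
            for (int k = 0; k < n; k++) {
                double w = 2 * Math.PI * j * k / n;      // angle of z^{jk}
                double c = Math.cos(w), s = Math.sin(w);
                re += x[k][0] * c + x[k][1] * s;         // Re of conj(x_k) * z^{jk}
                im += x[k][0] * s - x[k][1] * c;         // Im of conj(x_k) * z^{jk}
            }
            y[j][0] = re / Math.sqrt(n);
            y[j][1] = im / Math.sqrt(n);
        }
        return y;
    }
    public static void main(String[] args) {
        double[][] x = {{1, 0}, {2, 0}, {-1, 0}, {0.5, 0}};
        double[][] xx = transform(transform(x));
        for (int j = 0; j < x.length; j++)
            System.out.printf("%6.3f  vs  %6.3f%n", x[j][0], xx[j][0]);  // columns match
    }
}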

I know exactly what you mean about "original research". I have been confronted with many little "problems" over the course of a 40-year career in applied engineering. Unlike others around me, my first instinct is to look inward, not outward, for the solution. A telling fact is that the literature survey section of my thesis was the weakest part. My advisor had to force me to strengthen it, after my own research was completed. I didn't enjoy publishing, and I don't enjoy reading tech journals.
I have undoubtedly re-invented a few wheels, but I don't care about that. It's satisfying to solve a problem myself, even if I am not the first. Admittedly, I haven't always come up with the best solution. But I always get insight that I wouldn't have gotten by "copying" someone else's answer. I think that justifies my approach in the long run. I have occasionally discovered things that I suspect are unique/original. But I never cared enough to try to find out. Besides, I have always worked in a competitive industry. If I discovered something that gives us an edge, why would I want to blab it to the whole world?
--Bob K 15:16, 9 August 2007 (UTC)

Hi Bob. I got my first job as a programmer in 1969, so I do not have quite the same amount of experience as you do. I must congratulate you on your discoveries that gave your company a competitive edge. In the mixture of cooperation and competition within and amongst companies I found it difficult to apply my discoveries in cooperation with others, so I worked out my solutions alone. I do have a few disciples, however. The number, n, of disciples makes the difference between a fool and a prophet. (n=0) <=> (it's a fool). Creativity and Planning are complementary qualities in a company; new ideas never fit into the plan, so planning kills creativity, although it was not planned that way. Creativity may lead to savings. Offer 5% savings and your client becomes happy. Offer 10% savings and your client gets suspicious. Offer 50% savings and your client gets offended. Offer 90% savings and your client gets furious. Some companies officially welcome creativity, but they are not sincere. So the value of creativity is simply that it is fun. Have a nice day. Bo Jacoby 08:00, 10 August 2007 (UTC).

[edit] good old days

Sounds like we're exactly the same age. I did indeed start programming in 1967. But it was just a summer job, as an undergrad. I graduated in 1969. The computer was a "7700", one of two prototypes discarded by IBM and donated to universities. It had neither a compiler nor an assembler. A couple of grad students were supposedly working on those projects, but I never saw them in my two summers there. And I wasn't even sure what those words meant. My actual introduction to FORTRAN didn't come until junior year, after the first summer. By that time I was proficiently writing programs for basic data analysis, in octal, on punched cards. Looking back, I realize that there must have been a way to single-step through programs, but nobody told me about that concept. I was effectively running in batch mode, although only a handful of people actually used that machine. Anyhow, I learned to be very very careful, because it was easier than trying to figure out what went wrong later. FORTRAN just made it even harder to goof up. So for a long time, I thought that programs should compile and run error-free the first time. And mine usually did, which amazed many people over the years. I still hear stories about it when the old-timers get together.
--Bob K 12:26, 10 August 2007 (UTC)

Oh those good old days! I guess that I am a couple of years younger than you? (I was born in July 1946). My first computer was the GIER, which I experienced at the university. That was fascinating. I joined Regnecentralen in 1969. As my first assignment I coded a procedure for computing the Euler Beta function in ALGOL 60 for the RC4000 computer. We used paper tape rather than punched cards and the ASCII alphabet rather than EBCDIC; otherwise I share your experience regarding the benefits of careful and error-free programming, but I made many errors before I reached that level of skill. Programming was done by paper and pencil, and eraser!, and the program was punched using a flexowriter. Then the paper tapes were read by a paper tape reader into the computer, and the output printed on a line printer for further study. Later I learned to code in assembler for the RC4000 and for the NOVA1200 computers. But my invention of ordinal fractions was not until 1980, when I made general-purpose administrative software. It is better than present database management systems, but it is far too simple. Bo Jacoby 17:50, 10 August 2007 (UTC).

You are older by 10 months. I feel very fortunate to have been able to experience the good old days. Productivity was lower, but so were expectations. I skipped ALGOL, PASCAL, ADA, and probably a few others I can't remember. Did several assembly languages though, the most challenging being a pipelined array processor (MAP300). Finally discovered C, the perfect language for me. Have managed to avoid the OO bandwagon, C++, and JAVA. When I find something I like, it takes an act of Congress to take it away. Watermelon and grilled hamburgers are still my favorite foods. Now it's getting hard to find an old-fashioned watermelon (large, with seeds), so I am trying to grow my own.
--Bob K 19:59, 10 August 2007 (UTC)

[edit] Exponentiation

Hi Bob. Take a look at talk:exponentiation. The account of definition compatibility in Exponentiation#Powers_of_e has been vandalised. Your opinion is welcome. Have a nice day. Bo Jacoby 09:07, 11 August 2007 (UTC).

[edit] Simulating red noise

Dear Bob: I recently read and used the Window Function page to implement most of them in my PeriodogramMaker. Today, I read through the related discussion, and concluded that you must be the primary author and custodian of that page. For this reason, I thought you might be able to help me with a problem I am working on. I think I am the first one to have noticed it, but maybe not, as you noted above.

I have been working on issues related to detecting weak periodic signals in red noise for several years. There are two separate issues: detecting weak signals and modelling red noise. I believe that the most powerful test for periodic signals is the unbinned modified Rayleigh test (Orford 1996, [2]). For weak signals, it is particularly important to oversample the periodogram in order to detect those peaks that may be between two independent frequencies. I also formulated a version of this test for binned time series. Recently, doing some experimenting on leakage functions, I found out that by padding the data with zeros in order to reduce the frequency step size, I get a close-to-identical periodogram with the FFT as with the modified Rayleigh test, but in a fraction of the time, of course. The only thing about using the FFT is that I have to crop the periodogram in order to display only the frequency range that is physically meaningful, but it's really not a big deal. Any comments on this?

Oversampling the periodogram addresses the issue of scalloping loss. Don't know what "binned time series" means.
--Bob K (talk) 14:49, 19 December 2010 (UTC)
What is scalloping loss? By "oversampling" I mean that we test more than one frequency within each independent Fourier spacing. I use "time series" to designate either a series of arrival times or a light curve. "binned time series" is therefore a light curve because we do not have information about individual events, just a number of events per time bin.
GBelanger (talk) 15:47, 9 January 2011 (UTC)
Scalloping loss is described at this link. The figure that goes along with it was produced by zero-padding. Pay particular attention to the labels on the frequency axis.
--Bob K (talk) 18:24, 14 January 2011 (UTC)
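For readers following along, here is a minimal sketch of scalloping loss and the zero-padding remedy (hypothetical parameters; a direct DFT is used for clarity rather than an FFT): a tone midway between two bins shows a reduced peak, and padding samples the same spectrum on a finer frequency grid.

// Scalloping loss demo: a sinusoid midway between DFT bins shows a reduced
// peak magnitude; zero-padding the record samples the spectrum more finely.
public class Scalloping {
    static double peakMagnitude(double[] x, int Nfft) {   // Nfft > x.length means zero-padding
        double peak = 0;
        for (int k = 0; k < Nfft; k++) {
            double re = 0, im = 0;
            for (int n = 0; n < x.length; n++) {          // samples beyond x.length are zero
                double w = -2 * Math.PI * k * n / Nfft;
                re += x[n] * Math.cos(w);
                im += x[n] * Math.sin(w);
            }
            peak = Math.max(peak, Math.hypot(re, im));
        }
        return peak;
    }
    public static void main(String[] args) {
        int N = 64;
        double[] x = new double[N];
        for (int n = 0; n < N; n++)
            x[n] = Math.cos(2 * Math.PI * 10.5 * n / N);   // tone between bins 10 and 11
        System.out.println("no padding: " + peakMagnitude(x, N));      // ~ 0.64 * N/2
        System.out.println("4x padding: " + peakMagnitude(x, 4 * N));  // ~ N/2
    }
}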

Now to my real problem. I have surveyed the different techniques that are employed to generate red noise (f^{−a}, with a ≥ 0, where a = 0 is white noise), and have chosen to use the one that seems to be the best for my purposes. The algorithm is that of Timmer & König (1995, [3]). It involves generating the Fourier components according to the power-law with specified spectral index, randomizing them about the true power spectrum, and then taking the inverse transform in order to end up with a pseudo-random red-noise time-domain signal. The time-domain signal is then degraded in some way, more or less depending on the count rate. My novel approach to this step is to construct a CDF from the time-domain signal, and then use the standard inversion Monte Carlo technique to randomly pick a given number of event arrival times, defined by the count rate and duration.
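A minimal Java sketch of that last step, inverse-CDF ("inversion Monte Carlo") sampling of arrival times (the names and the non-negativity assumption are illustrative choices, not GBelanger's actual code):

import java.util.Random;

// Build a CDF from the binned time-domain signal, then map uniform random
// ordinates through it to obtain event arrival times.
// Assumes 'signal' has already been shifted/scaled to be non-negative.
public class InversionSampler {
    static double[] sampleArrivalTimes(double[] signal, double duration, int nEvents, Random rng) {
        int n = signal.length;
        double[] cdf = new double[n];
        double sum = 0;
        for (int i = 0; i < n; i++) { sum += signal[i]; cdf[i] = sum; }
        for (int i = 0; i < n; i++) cdf[i] /= sum;            // normalize to [0, 1]
        double binWidth = duration / n;
        double[] times = new double[nEvents];
        for (int e = 0; e < nEvents; e++) {
            double u = rng.nextDouble();                       // uniform ordinate in [0, 1)
            int i = 0;
            while (i < n - 1 && cdf[i] < u) i++;               // invert the CDF (linear scan)
            times[e] = (i + rng.nextDouble()) * binWidth;      // place the event within bin i
        }
        return times;
    }
}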

Not sure what you mean by "count rate". No clue what the last sentence means. Otherwise, I think I followed you. Your initial "spectrum" that you created out of thin air has no "leakage". Each individual component is a delta function, and if you IFT'd it by itself, you would see a rectangularly-windowed sinusoid.
--Bob K (talk) 14:49, 19 December 2010 (UTC)
The "count rate" is the number of events per unit time. Each component of the power spectrum is defined to follow the power-law with the specified index. The random scatter around the power-law at a given frequency is proportional to the power at that frequency. And since we define the power at each frequency, there is no leakage in the initial power-spectrum. The last sentence means that I get my arrival times by constructing a cumulative distribution function, a cumulative histogram, from the time-domain signal. Then I draw random numbers between 0 and 1 (the ordinates), and using the cumulative distribution function, get the corresponding arrival times on the time axis. This is the standard MC inversion technique.
GBelanger (talk) 15:47, 9 January 2011 (UTC)

The authors and everyone who has up to now used their algorithm (about 100 refereed publications) have all assumed that the frequencies used in generating the Fourier components define the natural time scale for the time-domain signal. For instance, if you want to generate a T = 10^5 s time series, then you will define the frequencies from f_min = 10^−5 Hz up to the Nyquist frequency in steps of df = 10^−5 Hz. But I recently realized that, in fact, the time-domain signal resulting from the IFT is truly arbitrary both in y and in x. The y-axis is easy to rescale by multiplication (the mean of the output is 1), which does not modify the fractional variance that must be preserved. But for the x-axis, my claim is that it does not have a natural scale.

Whatever it is that's going on, it makes no sense (to me) to describe it as "arbitrary" scaling. I have never seen anything from an IFT that I would describe that way.
--Bob K (talk) 14:49, 19 December 2010 (UTC)
Can you please expand on this? How do we go about defining the time scale of the time-domain signal resulting from the IFT of a simulated spectrum? Starting from a detected time-domain signal, we have 100% of the information because we have the distribution of events in time, and if we take the FT of the time-domain signal we get the distribution of powers in frequency space. But, starting from the distribution of powers, it's like we only have 50% of the information, because we only have the power distribution in frequency space, and do not have the details of the distribution of the events in time. This is where I think the problem lies. But, please clarify this for me if you see it differently.
GBelanger (talk) 15:47, 9 January 2011 (UTC)
The formula:
X_k = \sum_{n=0}^{N-1} x_n\cdot e^{-\frac{j 2 \pi}{N} k n}
is equivalent to this:
X\left(\frac{k}{NT}\right) = \sum_{n=0}^{N-1} x(nT)\cdot e^{-j 2 \pi \left(\frac{k}{NT}\right)(nT)},
where T is the time-sample interval ("seconds"), 1/T is the sample rate (samples per second), N is the number of samples, NT is the duration of the waveform sampled, and k is a spectral index, corresponding to actual frequency k/(NT). AND PLEASE NOTE THAT A GIVEN FREQUENCY DOES NOT HAVE THE SAME INDEX FOR TWO DIFFERENT VALUES OF N.
Similarly, the IDFT formula:
x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k\cdot e^{\frac{j 2\pi}{N} k n}
is equivalent to:
x(nT) = \frac{1}{N} \sum_{k=0}^{N-1} X\left(\frac{k}{NT}\right)\cdot e^{j 2\pi \left(\frac{k}{NT}\right)(nT)}
where n is a time index, corresponding to nT "seconds" on the time axis.
--Bob K (talk) 18:13, 14 January 2011 (UTC)
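To make the capitalized point concrete, a minimal Java sketch (the sample interval and test frequency are hypothetical numbers): the same physical frequency k/(NT) maps to a different index k when N changes.

// The DFT index k corresponds to physical frequency k/(N*T), so the same
// frequency gets a different index for different N.
public class IndexToFrequency {
    public static void main(String[] args) {
        double T = 0.01;                       // sample interval, seconds
        double f = 50.0;                       // a physical frequency, Hz
        for (int N : new int[]{100, 1000}) {
            double k = f * N * T;              // index carrying that frequency
            System.out.printf("N = %4d:  f = %.1f Hz is index k = %.0f  (bin width %.2f Hz)%n",
                              N, f, k, 1.0 / (N * T));
        }
    }
}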

This is easy to see in at least two ways. If we generate two time series with the same spectral index and mean count rate as described above, one on the basis of T = 10^3 s and the other on the basis of T = 10^5 s, and then compare their periodograms, they do not fall one on top of the other as real data do, and as they should. They are shifted on the frequency axis one with respect to the other.

"Same spectral index" means we're now talking about simple sinusoids with the same frequency, each created by IFT'ing a delta function. Right? And the IFTs are different sizes, because the resolutions are different and the Nyquist frequencies are identical. Right? This should produce rectangularly-windowed sinusoids of different durations. Since the IFTs are different sizes, sinusoids with the same frequency do not have the "same spectral index". If you really meant that, then that's your problem. But if my interpretations are correct so far, I think the devil is in the details of the periodogram, which aren't given here. With simple sinusoids, I would indeed be surprised to see a freq shift (greater than ½ bin). But with a distribution proportional to f -a, I would expect to see more leakage from bin n into bin n+1 (for all values of n) than vice-versa, causing an apparent shift to the right.
--Bob K (talk) 14:49, 19 December 2010 (UTC)
The "spectral index" refers to the index of the power-law of the underlying power spectrum. I am not using simple sinusoids, nor am I using delta functions. But maybe I just don't understanding what you are describing. I just use a simple power-law of the form f(x) = x^-alpha, where alpha is the spectral index, and x is the frequency. This is the underlying power spectrum to which a random scatter is added. The time-domain signal results from the IFT of this set of Fourier components. Here is the actual code that generates the components:
// Construct Fourier components following Timmer & Konig (1995).
// Assumed context (from the surrounding program): 'index' is the power-law
// spectral index, 'engine' is the random-number engine, 'omegas' holds the
// positive frequencies, and Normal/Complex are the classes already in use.
double alpha = index;
Normal gaussian = new Normal(0, 1, engine);
Complex[] complex = new Complex[2 * omegas.length];
for (int i = 0; i < omegas.length; i++) {
    double spec = Math.pow(omegas[i], -alpha);  // power-law spectrum at this frequency
    double sigma = Math.sqrt(0.5 * spec);       // Gaussian scatter scales with the power
    // Positive frequency
    double re = gaussian.nextDouble() * sigma;
    double im = gaussian.nextDouble() * sigma;
    complex[i] = new Complex(re, im);
    // Negative frequency
    re = gaussian.nextDouble() * sigma;
    im = gaussian.nextDouble() * sigma;
    complex[complex.length - 1 - i] = new Complex(re, im);
    // Timmer & Konig take the conjugate of the positive-frequency component:
    // complex[complex.length - 1 - i] = complex[i].conjugate();
}
I look forward to your next response.
GBelanger (talk) 15:47, 9 January 2011 (UTC)
When you perform an IDFT, the result is limited to a summation of a finite number of pure, harmonically-related sinusoids ("components"). (Just look at the IDFT formula.) complex[i] is the amplitude and phase of the ith one. Actual "noise" is a summation of an infinite number of non-harmonically related sinusoids. Its Fourier transform is a continuous function of frequency, not the discrete-frequency function that you are synthesizing. You cannot create realistic noise from a finite number of components. When you properly sample bandlimited noise, no information is lost, but the DTFT is still a continuous function. When you sample the DTFT (so that you can do an IDFT), something is lost, and all you can get back is a finite number of harmonically-related sinusoids. Similar to what you started with, but not identical.
--Bob K (talk) 17:41, 14 January 2011 (UTC)
Perhaps that's not quite the right perspective for this discussion. A different one is that when you start with a true noise waveform and select a finite interval (windowing), that continuous (non-sampled) waveform is exactly representable in that interval by a Fourier series, comprising an infinite number of harmonically-related sinusoids. When you do an IDFT, you are essentially truncating that infinite series, so you cannot recreate the original noise you started with. Similarly, if you sample the windowed waveform, you introduce a loss of fidelity, due to aliasing, because the windowed waveform is not bandlimited.
--Bob K (talk) 20:39, 17 January 2011 (UTC)

Another way to see this problem is to generate Fourier components on the basis of T = 10^5 s, and map the resulting time-domain signal onto T = 10^3 s. Once again, the resulting periodogram will be different than the one for which the components were generated on the basis of T = 10^3 s and mapped onto T = 10^3 s.

The first exercise shows what we can call the time mapping effect, which can be understood as just stretching more or less the same time-domain signal in order to map it onto a certain duration. And the second exercise shows what we can call the frequency content effect, which just means that there can be more or fewer frequencies present in the signal no matter what its "duration" is.

The reason why we know that this is indeed a problem is that if you take a real data set for an observation of, say, T = 10^5 s and then pick any segment of it with duration 10^3 s, the periodograms of these two will fall more or less one on top of the other. The difference is that the periodogram of the full data set will extend down to lower frequencies. Therefore, this is exactly how simulated data should behave also, but as I explained above, they do not. To do this, I think that it is necessary to "calibrate" the simulations using real data.

But, this is where I need your help and opinion. Firstly, what do you think of all this? Secondly, do you think that there is a way to know or determine a priori what is the natural time scale for the time-domain signal resulting from the IFT of the generated Fourier components? And thirdly, do you have a suggestion on how to perform this "calibration" between real data and the simulations? Thanks.

GBelanger (talk) 11:03, 16 December 2010 (UTC)

Not sure if I can be of any help here, because you have given it so much thought already and are still stuck. If you were more of a beginner, my hunch would be that I could answer your question once I finally understood it, which I currently don't after just one quick read-through. But under the circumstances, my hunch is that I will get stuck at the same point as you. Anyhow, I've got to go now. I might come back and re-read later, but no promises. If you provide even more explanation/example, that might help. But you have already done a lot. I don't want to encourage you to waste time on me that might be better spent elsewhere. Of course, explaining a problem to someone else is often just the thing to end up answering your own question. Good luck.
--Bob K (talk) 14:30, 16 December 2010 (UTC)

Since I have nothing to lose, I opt for the option of more explanations, and hope that you will be willing to engage in this process with me. But, I would like to go step by step, striving for maximum clarity. So, do you agree that starting from the Fourier components and applying an IFT to them results in a corresponding time-domain signal whose scales are arbitrary both in x and y?

GBelanger (talk) 15:36, 16 December 2010 (UTC)

[edit] Simulating red noise II

Good afternoon Bob:

I decided to start a splinter section to continue the discussion, but make it easier to follow. Let me take up the last elements you brought up.

The formulas you wrote for the DFT and IDFT can be rewritten as follows, using dt instead of T to make it clear (at least to me) that this is the sampling interval or bin time, and not the total duration N·dt, which is now written as T.

The DFT:

X\left(\frac{k}{T}\right) = \sum_{n=0}^{N-1} x(ndt)\cdot e^{-j 2 \pi \left(\frac{k}{T}\right)(ndt)}

Here it is clear that the "unit of time", the variable ndt, goes from 0 to T-dt, for n from 0 to N-1. And that X is periodic in ndt with period T.

The IDFT:

x(ndt) = \frac{1}{N} \sum_{k=0}^{N-1} X\left(\frac{k}{T}\right)\cdot e^{j 2\pi \left(\frac{k}{T}\right)(ndt)}

I suspect that you rewrote these formulas in this manner so that we see the explicit appearance of the total duration T, which would then be interpreted as the "natural" timescale for the result of the IDFT. This is fine, and this is of course what we all tend to believe. But k/T is just a frequency, and can take on any value we like in a simulation, i.e., lower than 1/T. And does this help to resolve the problem that I find in my simulations, where different values of T used in the "proper" way for defining the Nyquist range of frequencies lead to power spectra that do not agree with each other in the overlapping frequency range?

Another point is that you use the term "spectral index" for k, but I just want to make sure you know that I use this term to describe the index of the power-law underlying the overall power spectrum.

In your last comment you wrote:

Actual "noise" is a summation of an infinite number of non-harmonically related sinusoids. Its Fourier transform is a continuous function of frequency, not the discrete-frequency function that you are synthesizing. You cannot create realistic noise from a finite number of components.

Do you think this is the problem? And that if I generate Fourier components using a lot more frequencies than the independent ones between 1/T and 1/2dt, I will be able to mimic a little better the fact that actual noise contains an infinite number of frequency components?

GBelanger (talk) 19 January 2011 (UTC)

[edit] Sandbox

s(t) \quad\stackrel{\mathcal{F}}{\Longleftrightarrow}\quad S(f)\,


S[k] \quad S_{1/T}(f) \quad S_N[k]\,


\underbrace{s(t)*\sum_{k=-\infty}^{\infty} \delta(t-kP)}_{\color{BrickRed}\operatorname{rept}_P(s(t))} \quad\stackrel{\mathcal{F}}{\Longleftrightarrow}\quad {\color{OliveGreen}\frac{1}{P}} \underbrace{S(f)\cdot \sum_{k=-\infty}^{\infty} \delta(f-k/P)}_{\color{OliveGreen}\operatorname{comb}_{1/P}(S(f))}\,


\underbrace{s(t)\cdot \sum_{n=-\infty}^{\infty} \delta(t-nT)}_{\operatorname{comb}_T(s(t))} \quad\stackrel{\mathcal{F}}{\Longleftrightarrow}\quad \frac{1}{T} \underbrace{S(f)* \sum_{n=-\infty}^{\infty} \delta(f-n/T)}_{\operatorname{rept}_{1/T}(S(f))}\,


s(t)\cdot \sum_{n=-\infty}^{\infty} \delta(t-nT) = \color{Red}\operatorname{comb}_T(s(t))\,


\frac{1}{T} S(f)* \sum_{n=-\infty}^{\infty} \delta(f-n/T) = \color{Blue}\frac{1}{T} \operatorname{rept}_{1/T}(S(f))\,


\underbrace{{\color{BrickRed}\operatorname{rept}_P(s(t))}\cdot \sum_{n=-\infty}^{\infty} \delta(t-nT)}_{\operatorname{comb}_T({\color{BrickRed}\operatorname{rept}_P(s(t))})} \quad\stackrel{\mathcal{F}}{\Longleftrightarrow}\quad \frac{1}{T}{\color{OliveGreen}\frac{1}{P}} \underbrace{{\color{OliveGreen}\operatorname{comb}_{1/P}(S(f))}* \sum_{n=-\infty}^{\infty} \delta(f-n/T)}_{\operatorname{rept}_{1/T}({\color{OliveGreen}\operatorname{comb}_{1/P}(S(f))})}


\underbrace{\operatorname{comb}_T(s(t))* \sum_{k=-\infty}^{\infty} \delta(t-kP)}_{\operatorname{rept}_P(\operatorname{comb}_T(s(t)))} \quad\stackrel{\mathcal{F}}{\Longleftrightarrow}\quad \frac{1}{P}\frac{1}{T} \underbrace{\operatorname{rept}_{1/T}(S(f))\cdot \sum_{k=-\infty}^{\infty} \delta(f-k/P)}_{\operatorname{comb}_{1/P}(\operatorname{rept}_{1/T}(S(f)))}


{\color{Red}\operatorname{comb}_T(s(t))}* \sum_{k=-\infty}^{\infty} \delta(t-kP) = \operatorname{rept}_P({\color{Red}\operatorname{comb}_T(s(t))})


\frac{1}{P}{\color{Blue}\frac{1}{T} \operatorname{rept}_{1/T}(S(f))}\cdot \sum_{k=-\infty}^{\infty} \delta(f-k/P) = \frac{1}{P}{\color{Blue}\frac{1}{T}} \operatorname{comb}_{1/P}({\color{Blue}\operatorname{rept}_{1/T}(S(f))})


Notation:

x(t)\, is a non-periodic function with Fourier transform X(f).\,
P=NT\,  is a time-interval of N samples taken at sample-intervals of time T.
x_P(t)\ \stackrel{\text{def}}{=}\ \sum_{k=-\infty}^{\infty} x(t-kP)  is a periodic summation of x(t).\,
X_{1/T}(f)\ \stackrel{\text{def}}{=}\ \sum_{k=-\infty}^{\infty} X(f-k/T)  is a periodic summation of X(f).\,
x_N[n]\ \stackrel{\text{def}}{=}\ \sum_{k=-\infty}^{\infty} x[n-kN]  is a periodic summation of discrete sequence x[n].\,
\int_P f(t)\,dt   is the integral of f(t)\, over any interval of length P.
\sum_{N} f[n]\,  is the summation of sequence f[n]\, over any interval of length N.
Continuous-time transforms

Transform, any duration:
X(f)\ \stackrel{\text{def}}{=}\ \int_{-\infty}^{\infty} x(t)\ e^{-i 2\pi f t}\,dt

Transform, finite duration or periodic:
\underbrace{\frac{1}{P}\cdot X\left(\frac{k}{P}\right)}_{X[k]}\ \stackrel{\text{def}}{=}\ \frac{1}{P} \int_{-\infty}^{\infty} x(t)\ e^{-i 2\pi \frac{k}{P} t}\,dt \equiv \frac{1}{P} \int_P x_P(t)\ e^{-i 2\pi \frac{k}{P} t}\,dt

Inverse, any duration:
x(t) = \int_{-\infty}^{\infty} X(f)\ e^{ i 2 \pi f t}\,df

Inverse, finite duration or periodic:
\underbrace{x_P(t) = \sum_{k=-\infty}^{\infty} X[k] \cdot e^{i 2\pi \frac{k}{P} t}}_{\text{Poisson summation formula}}

Discrete-time transforms

Transform, any duration:
\underbrace{X_{1/T}(f) = \sum_{n=-\infty}^{\infty} \overbrace{T\cdot x(nT)}^{x[n]}\cdot e^{-i 2\pi f nT}}_{\text{Poisson summation formula}}

Transform, finite duration or periodic:
\underbrace{\overbrace{X_{1/T}\left(\frac{k}{NT}\right)}^{X_k} = \sum_{n=-\infty}^{\infty} x[n]\cdot e^{-i 2\pi \frac{kn}{N}}}_{\text{Poisson summation formula}} \equiv \underbrace{\sum_{N} x_N[n]\cdot e^{-i 2\pi \frac{kn}{N}}}_{DFT}

Inverse, any duration:
x(nT) = \int_{1/T} X_{1/T}(f)\cdot e^{i 2\pi f nT} \,df

\sum_{n=-\infty}^{\infty} x[n]\cdot \delta(t-nT) = \underbrace{\int_{-\infty}^{\infty} X_{1/T}(f)\cdot e^{i 2\pi f t}\,df}_{\text{inverse Fourier transform}}

Inverse, finite duration or periodic:
x_N[n] = \underbrace{\frac{1}{N} \sum_{N} X_k\cdot e^{i 2\pi \frac{kn}{N}}}_{\text{inverse DFT}}

x_P(nT) = \frac{1}{T}\cdot x_N[n] = \sum_{N} \underbrace{\frac{1}{NT}\cdot X_{1/T}\left(\frac{k}{NT}\right)}_{X_{1/T}[k]} \cdot e^{i 2\pi \frac{kn}{N}}
