By: Robert Myers (rbmyersusa.delete@this.gmail.com), October 20, 2012 11:23 am

Room: Moderated Discussions

Emil Briggs (me.delete@this.nowherespam.com) on October 20, 2012 10:04 am wrote:

>

> Robert I don't know exactly what sort of

> background you have in these types of calculations so please forgive me if I

> start at too low a level here. The DFT acronym refers to **Density Functional
> Theory** and is not related to Fourier transforms of any kind.

>

> DFT calculations

> are accurate because they originate from quantum mechanical first principles. QM

> is non-local and the wavefunctions associated with the electrons extend over

> all space. In practice we can truncate them at some point or use periodic

> boundary conditions to limit the region of interest but as we increase the

> number of atoms the size of the region we have to address increases in a roughly

> linear manner. So we pick up one factor of N. But as the number of atoms

> increases the number of electrons increases as well and we pick up another

> factor of N. Finally the orthogonality constraint for the wave functions gives

> us another factor and we're at N^3.

>

> There are a variety of techniques to

> solve the resultant equations (referred to as the Kohn-Sham equations). Some of

> them use FFT's. Others use finite difference methods. The FFT based methods have

> scaling issues on computer systems with poor interconnect bandwidth. The finite

> difference methods can do better on such systems.

>

> In either case though the

> N^3 scaling of the algorithm with the number of atoms is the real fundamental

> limit. Even a system with enough interconnect bandwidth to provide good FFT

> performance on thousands of nodes would still run into this problem. Hence the

> work on finding algorithms that don't have that limit.
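Emil's three factors of N can be pinned down with a back-of-the-envelope cost model (the constants and function below are purely illustrative, not from either post): the real-space grid grows linearly with the atom count, the number of occupied orbitals grows linearly too, and keeping those orbitals mutually orthogonal means taking an inner product over the whole grid for every pair of orbitals.

```python
# Rough cost model for conventional Kohn-Sham DFT, following the argument
# quoted above. All numbers are illustrative, not measured.

def ks_ortho_cost(n_atoms, grid_per_atom=1000):
    n_orbitals = n_atoms              # electrons (hence orbitals) grow ~linearly
    n_grid = n_atoms * grid_per_atom  # real-space grid also grows ~linearly
    # Orthogonality constraint: ~N^2 orbital pairs, each requiring an inner
    # product over the full grid -> N * N * N operations overall.
    return n_orbitals * n_orbitals * n_grid

for n in (10, 100, 1000):
    print(n, ks_ortho_cost(n))
```

Doubling the number of atoms multiplies the orthogonalization work by eight, which is the N^3 wall the quoted post describes.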

Well, at least the mystery of the amazing FFT has been resolved. In my world, DFT is used as an acronym for Discrete Fourier Transform, and it is usually used when, for any number of reasons, the cleverness of the FAST Fourier Transform (FFT) is not available. We are agreed that FFTs do not scale well with limited global bandwidth.
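To make the terminology concrete (a minimal sketch, not from either post): the Discrete Fourier Transform evaluated naively costs O(N^2), since each of the N output frequencies is a sum over all N samples; the FFT reorganizes exactly the same sum into O(N log N) work.

```python
import cmath

# Naive Discrete Fourier Transform: N outputs, each a sum over N inputs,
# hence O(N^2) work. The FFT computes the identical result in O(N log N).
def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

# DFT of a constant signal: all energy lands in the zero-frequency bin.
spectrum = dft([1.0, 1.0, 1.0, 1.0])
print([round(abs(c), 6) for c in spectrum])  # → [4.0, 0.0, 0.0, 0.0]
```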

I just happen to have a copy of Kittel here, but the Kohn-Sham equation is not in the index. Fortunately, there is a Wikipedia article on the subject, so now I know everything I need to know about the subject (that's a joke). I suspect that, if I knew more about the subject than I do, I'd have many of the same concerns about what you are doing as I do about many of the things that are done in CFD, but it's not my discipline, and I don't have the competence to engage in a meaningful discussion.

As with the flapping wing problem, sometimes it doesn't matter that the actual mathematics are subtle and very difficult. Sometimes even a crude approach will provide the insight you need. Your post points to the really fundamental problem, though. For many problems in actual three-dimensional space, adding zillions of processors and memory locations buys you very little, because the scaling with some N is so daunting. I claim that that inescapable reality only strengthens my argument. Brute force isn't going to get us anywhere. We should be focusing on computers that are reasonably fast, don't require ugly approximations (and all finite difference approximations are ugly, as far as I'm concerned), and are as transparent as possible to the user. Pushing things just a bit further in the problem that interests me may offer us no further useful insight, in which case my decision not to be so interested in warehouse-sized computers is the right one. Everything you say indicates to me that, aside from your professional investment so far, your thinking should be headed in the same direction.

Robert.