The multi-core programming challenge

Daniel Cooke
Paul Whitfield Horn Professor
Computer Science Department
Texas Tech University

Abstract:

History has shown that new languages typically penetrate computing practice in the context of a significant change in computing hardware. The multi-core is considered one of the most radical hardware changes in history. Apart from its power to dramatically enhance computing performance, the multi-core is likely to affect our everyday lives. Parallel processing systems have been, and remain, time-consuming and expensive to build, and they require a significantly higher level of expertise to develop. With the advent of multi-core systems, the average programmer will need to develop this higher-level capability. There is a widely held belief that new languages and computing practices must be developed to exploit the full potential of multi-core systems.

To date, parallel programming has been the exclusive purview of so-called extreme programmers. They face daunting challenges not faced by the sequential programmer: they must effectively identify, expose, and express parallelisms. These challenges arise from an additional dimension of pitfalls the parallel programmer must avoid. First, there are significant errors (such as race conditions, deadlock, and starvation) that occur only in parallel programs. Nancy Leveson pointed out that, due to non-determinism, testing alone does not necessarily discover these errors. This was tragically demonstrated in the Therac-25 system. Second, a programmer parallelizing a sequential program may introduce new logic errors; in other words, the parallel code may not satisfy the specification that was satisfied by the sequential program. Third, it is very easy for a naïve parallel programmer to produce parallel programs that actually perform worse than their sequential versions. This is not necessarily an exhaustive list of pitfalls, but it does show that transitioning from sequential to parallel programming can have a significant impact on both functional and nonfunctional requirements.
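A minimal illustration of the first pitfall, a race condition, can be sketched in Python (this example is not from the talk; the counter and thread counts are arbitrary). The unsynchronized read-modify-write on a shared counter is exactly the kind of error that may or may not surface on any given test run:

```python
import threading

counter = 0
lock = threading.Lock()

def increment_unsafe(n):
    global counter
    for _ in range(n):
        # read-modify-write is not atomic: two threads can read the same
        # value and each write back value+1, silently losing an update
        counter += 1

def increment_safe(n):
    global counter
    for _ in range(n):
        # holding the lock makes the read-modify-write atomic
        with lock:
            counter += 1

def run(worker, n_threads=4, n=100_000):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# The unsafe total may fall short of 400000 when an interleaving loses
# updates; whether that happens varies run to run -- the non-determinism
# that makes testing alone unreliable for finding such errors.
unsafe_total = run(increment_unsafe)
safe_total = run(increment_safe)
print(unsafe_total, safe_total)
```

The locked version is deterministic; the unlocked version demonstrates why a program can pass its tests many times and still be wrong.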

In addition to the transition from sequential to parallel programming, many believe that programmers are also likely to transition from procedural or object-oriented programming to functional programming. Many functional languages eliminate the need to expose and express parallelisms, requiring the programmer only to identify them. Furthermore, the functional paradigm facilitates formal verification by treating computation as the evaluation of mathematical functions and by avoiding the maintenance of state, thereby reducing side effects. Thus, the emergence of modern functional programming languages will play an increasing role in making formal methods for industrial control technologies more cost-effective.
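The idea can be illustrated in Python (used here merely as a stand-in for a functional language): when a function is pure, the same `map` expresses both the sequential and the parallel evaluation, with no locks or explicit coordination. The process pool below is an assumed, illustrative mechanism, not SequenceL's:

```python
from concurrent.futures import ProcessPoolExecutor

def square(x):
    # a pure function: the result depends only on the argument,
    # and no shared state is read or written
    return x * x

data = list(range(10))

# sequential evaluation of the map
seq = list(map(square, data))

if __name__ == "__main__":
    # parallel evaluation of the same map across worker processes;
    # purity guarantees the two results are identical
    with ProcessPoolExecutor() as pool:
        par = list(pool.map(square, data))
    assert par == seq
```

Because `square` has no side effects, the programmer only identifies the parallelism (the map over independent elements); exposing and scheduling it is left to the runtime.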

In the near future, the average programmer may have to transition from being a procedural programmer to being a functional programmer (i.e., a language paradigm shift) as well as from being a sequential programmer to being a parallel programmer (i.e., yet another paradigm shift). We hypothesize that the transition from procedural sequential to functional parallel programming is not a 2N problem (where N is the number of paradigm shifts); it is likely to be at least an N² problem. It will be of enormous difficulty. In this talk I will identify the issues facing modern programmers and present an overview of a technical solution we have developed.

About the Speaker

Dr. Daniel Cooke serves as the Paul Whitfield Horn Professor of the Computer Science Department at Texas Tech University and as Director of its Center for Advanced Intelligent Systems. Previously, Dr. Cooke served as the Manager of NASA's Intelligent Systems Program, a national research initiative in computer science aimed at NASA-relevant problems. Cooke led the activities to establish the technical content of the program, took it from formulation to implementation, and helped establish the program office, which he headed at NASA Ames Research Center in Mountain View, California.

Since 1990, Daniel Cooke has published more than 95 technical papers in the areas of computer language design and software engineering. He has served as PI or Co-PI on research grants totaling more than $14 million, edited many journal special issues, published a book on Computer Language Design, edited a book on Computer Aided Software Engineering, and served as chair or vice-chair for 19 international conferences or workshops. He currently serves as the Chair of the NASA Ames Research Institute for Advanced Computer Science Scientific Advisory Council, Software Engineering Area Editor for IEEE Computer, the Formal Methods Area Editor of the International Journal of Software Engineering and Knowledge Engineering, and as an editor of the International Journal of Semantic Computing.

Dr. Cooke has been an American Electronics Association Fellow, a MacIntosh-Murchison Faculty Fellow, and held the MacIntosh-Murchison Chair in Engineering at U.T. El Paso. In 1996 he was the recipient of the University of Texas at El Paso's Distinguished Achievement in Research Award. In 2001, Cooke received the NASA Ames Research Center Information Sciences Award for leadership in establishing a Model Strategic Research Initiative for NASA. In 2002, he received the NASA Exceptional Achievement Medal and the NASA Group Award for contributions to the CICT program. In 2006 he was the recipient of the IEEE Computer Society's Technical Achievement Award for work on SequenceL.

Dr. Cooke discovered two new computational laws upon which computing can be based, leading to the language SequenceL. SequenceL has been used to prototype Guidance, Navigation, and Control Systems for the space shuttle and the crew exploration vehicle. A byproduct of the laws is the identification of parallelisms inherent in a problem solution. In June 2009, Texas Multicore Technologies, Inc. was founded to commercialize a SequenceL-to-multicore compiler. The company is now working with leading software and hardware companies to improve their ability to parallelize their codes for multicore processing.