Summer University Courses

Multiprocessor Programming

Prof. Burkhard Englert, California State University at Long Beach, California.

Content
In recent years the computer industry has been undergoing a vigorous shake-up. For the time being, the major chip manufacturers have given up trying to make processors run faster: clock speeds cannot be increased any more without overheating. So manufacturers are turning to « multicore » architectures, in which multiple processors (cores) communicate directly through shared hardware caches. Multicore chips make computing more effective by exploiting parallelism: harnessing multiple processors to work on a single task.

In this course we will study the principles and practice of multiprocessor programming. On the principles side, we will focus on what can be computed in an asynchronous concurrent environment. On the practical side, we will experiment with the Intel Manycore Testing Lab (MTL), a special remote system that Intel Corporation has set up to let faculty and students work with computers with many cores. We will create programs that intentionally use multi-core parallelism, upload and run them on the MTL, and explore the issues in parallelism and concurrency that arise.
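
As a flavor of the kind of program involved, a minimal multicore task splits its work across threads and combines partial results. The sketch below uses C++ threads purely for illustration (the course text is Java-based; all names are ours):

```cpp
#include <cassert>
#include <numeric>
#include <thread>
#include <vector>

// Sum a vector by splitting it across several worker threads.
long long parallel_sum(const std::vector<int>& data, unsigned nthreads) {
    std::vector<long long> partial(nthreads, 0);
    std::vector<std::thread> workers;
    std::size_t chunk = data.size() / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t == nthreads - 1) ? data.size() : begin + chunk;
        // Each thread writes only its own slot, so no lock is needed.
        workers.emplace_back([&, t, begin, end] {
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto& w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}
```

Even this toy raises the course's central questions: how to partition work, and how to combine results without the threads interfering with each other.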

Bibliography
The Art of Multiprocessor Programming, M. Herlihy and N. Shavit, Morgan Kaufmann, ISBN 978-0-12-370591-4

Constraint Programming

Prof. Todd Ebert, California State University at Long Beach, California.

Constraint programming is both a problem-solving and a programming paradigm that allows one to define a problem by writing a constraint program, which involves defining one or more variables and a set of constraints that the variables must satisfy. The goals of constraint programming are to provide the user with a richly expressive language for defining the problem, and to provide algorithmic support for automatically deriving solutions to the problem.
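
Frega's own syntax is not shown here; as a language-neutral illustration of the idea, variables with a shared domain plus binary constraints, solved by backtracking search, might be sketched in C++ as follows (all names are illustrative):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <utility>
#include <vector>

// A tiny constraint-satisfaction solver: variables 0..n-1 share an integer
// domain; binary constraints on pairs (i, j) with i < j are checked during
// a depth-first backtracking search.
using Constraint = std::function<bool(int, int)>;

bool solve(int n, const std::vector<int>& domain,
           const std::map<std::pair<int,int>, Constraint>& cons,
           std::vector<int>& assign, int var = 0) {
    if (var == n) return true;                       // all variables assigned
    for (int value : domain) {
        assign[var] = value;
        bool ok = true;
        for (int prev = 0; prev < var && ok; ++prev) {
            auto it = cons.find({prev, var});
            if (it != cons.end() && !it->second(assign[prev], value)) ok = false;
        }
        if (ok && solve(n, domain, cons, assign, var + 1)) return true;
    }
    return false;                                    // dead end: backtrack
}
```

For example, coloring a triangle graph amounts to three variables over domain {0, 1, 2} with an inequality constraint on each edge.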

Students will be introduced to the foundations of constraint programming and to the Frega constraint programming language, a general-purpose interpreted language that attempts to meet the above-stated goals. Students will work in groups to solve challenging constraint-satisfaction problems with the help of Frega.

Course Topics
1. Overview of Constraint Programming
2. Examples of constraint-satisfaction problems
3. Review of set theory, logic, and relations
4. Constraint networks: binary, minimal, and projection networks
5. Arc and path consistency
6. A survey of different types of constraints, including Boolean and relational constraints
7. Introduction to the Frega programming language
8. Introduction to local search strategies: genetic algorithms, simulated annealing and tempering, and local projections

Course Prerequisites
At least one semester of discrete mathematics, one semester of data structures and algorithms, and two semesters of introductory programming. Students should have Junior or Senior standing.

Course Materials
Students will be provided access to a Linux lab with a Frega interpreter available on all lab machines.

Course Grading
Student evaluation will be based on one exam (40%) and three lab projects (20% each).

Bibliography
R. Dechter, « Constraint Processing », Morgan Kaufmann, 2003
G. Fishman, « A First Course in Monte Carlo », Duxbury Press, 2005
A. Eiben, J. Smith, « Introduction to Evolutionary Computing », Springer, 2010

Computer Graphics Applications

Prof. Soon Tee Teoh, San Jose State University, California, USA

The class will begin by presenting foundational concepts in 2D and 3D Computer Graphics. Then, students will be introduced to a variety of interesting current applications such as visualization, advanced graphics techniques such as ray-tracing, and recent innovations such as programmable shaders. Procedural modeling, an ongoing research topic, will also be covered.

Topics
Basic Graphics Programming in OpenGL

3D Graphics Basics (Projection and Lighting)
Scientific Visualization (Volume Rendering)
Information Visualization
Global Illumination Methods (Ray-Tracing and Radiosity)
Programmable Shaders
Procedural Modeling
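
As a small taste of the ray-tracing topic listed above, the core geometric operation of a ray tracer, intersecting a ray with a sphere, reduces to solving a quadratic. A simplified C++ sketch (not course code; all names are ours):

```cpp
#include <cassert>
#include <cmath>

// Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for the ray
// parameter t, with d assumed to be a unit vector. Returns the nearest
// positive t, or -1 if the ray misses the sphere.
struct Vec { double x, y, z; };
double dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

double intersect(Vec origin, Vec dir, Vec center, double radius) {
    Vec oc = sub(origin, center);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4 * c;               // discriminant of the quadratic
    if (disc < 0) return -1.0;                 // no real root: ray misses
    double t = (-b - std::sqrt(disc)) / 2.0;   // nearer of the two roots
    return (t > 0) ? t : -1.0;
}
```

A whole ray tracer is essentially this test repeated for every pixel and every object, plus the lighting computations covered in the 3D graphics topic.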

Assessment
Students will be assessed by a programming project assignment and an exam.

Equipment
Students should bring their own laptops with C/C++, C# or Java installed. C/C++ is preferred.

Prerequisites
Substantial programming experience, preferably in C/C++.

Scala for the impatient

Prof. Cay S. Horstmann, San Jose State University, California.

Scala is a statically typed, object-oriented/functional programming language for the Java and .NET Virtual Machines that many consider the “next Java”. Scala is sometimes described as complex and hard to learn, but actually the opposite is true for everyday programming. The complex features enable library designers to build libraries that are easy to use. The result is a language that is more productive, more extensible, and more enjoyable than Java. In this course, you will learn how to use Scala for common programming tasks and for building domain-specific languages (DSLs), small embedded languages for special-purpose computations. In keeping with the overall theme of this year’s summer school, we will focus on DSLs for graphics and animation.

Prerequisites
(1) Object-oriented programming in C++ or Java.
(2) A laptop for the labs.

Image Processing

Prof. Michel Kocher, HEIG-Vd, Switzerland

In this course two major approaches to image processing will be developed. First, the linear approach, based on signal processing theory: linear models, convolution, the Fourier transform, correlation and filter design. Second, a non-linear approach provided by the Mathematical Morphology framework. This paradigm, developed in France about 50 years ago, uses only minimum and maximum operators together with intersection, union and negation. It produces astonishingly good results, is very robust to noise and is, in certain cases, a very interesting alternative to the linear approach.
These two methodologies will be described in theory and illustrated with numerous applications taken from the industrial world as well as from the biomedical domain. The students will apply the described algorithms with the help of the Matlab language and the Image Processing Toolbox. At the end of the course, the students will be able to describe an image processing problem in terms of different algorithms. They will also be able to program some of them and to compare them in terms of speed (complexity) and noise robustness.
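
Although the course work is done in Matlab, the min/max idea behind morphology fits in a few lines. The 1-D C++ sketch below (illustrative only) erodes and dilates a signal with a flat 3-sample structuring element; an « opening » (erosion then dilation) removes an isolated bright spike entirely, the kind of noise robustness mentioned above.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Grayscale erosion/dilation with a flat 3-sample structuring element:
// each output sample is the minimum (erosion) or maximum (dilation) of
// its neighborhood. Borders are handled by clamping to the signal.
std::vector<int> erode(const std::vector<int>& f) {
    std::vector<int> g(f.size());
    for (std::size_t i = 0; i < f.size(); ++i) {
        std::size_t lo = (i == 0) ? 0 : i - 1;
        std::size_t hi = (i + 1 == f.size()) ? i : i + 1;
        g[i] = *std::min_element(f.begin() + lo, f.begin() + hi + 1);
    }
    return g;
}

std::vector<int> dilate(const std::vector<int>& f) {
    std::vector<int> g(f.size());
    for (std::size_t i = 0; i < f.size(); ++i) {
        std::size_t lo = (i == 0) ? 0 : i - 1;
        std::size_t hi = (i + 1 == f.size()) ? i : i + 1;
        g[i] = *std::max_element(f.begin() + lo, f.begin() + hi + 1);
    }
    return g;
}
```

A linear smoothing filter would only spread such a spike out; the morphological opening removes it without blurring the rest of the signal.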

Prerequisites
Matlab (knowledge + software), good notions of signal processing

Introduction to CUDA programming

Prof. Stephan Robert, HEIG-Vd, Switzerland

There was a time when parallel computing was used only for specific applications and was therefore reserved for an elite in computer science. This perception has changed in recent years. Nearly all consumer computers shipping this year (2011) have multicore central processors. From dual-core netbooks to 8- and 16-core workstations, parallel computing is no longer relegated to exotic supercomputers. Compared to traditional data processing on central processing units (CPUs), performing general-purpose computations on a graphics processing unit (GPU) is a new concept. For 3D graphics, Nvidia began to release graphics accelerators (the GeForce 3 series) that were affordable enough to attract attention. In the beginning only OpenGL and DirectX were available to program GPUs, but owing to their limitations for general-purpose computing (the constraints of programming within a graphics API), Nvidia released a “new” language, CUDA C (largely inspired by C), adapted to its CUDA architecture. Users are no longer required to have any knowledge of the OpenGL or DirectX graphics programming interfaces. Since 2007, a variety of applications have enjoyed a great deal of success in medical imaging, finance, movie special effects, computational fluid dynamics, environmental science and more, thanks to orders-of-magnitude performance improvements. After a concise introduction to the CUDA platform and architecture, we will detail the techniques associated with CUDA features. We will learn how to write software that delivers outstanding performance.

Content
GPUs and CPUs
Basics
Architecture differences
Introduction to CUDA C
Getting started: Installation of the tools
My first program “Hello, World!”
Parallel Programming in CUDA C
Thread cooperation
GPU Memory (constant, texture)
Control flow and synchronization (streams, atomics)
CUDA Tools (CUFFT, CURAND, CUBLAS, …)
Tackling a new application

Prerequisites
C-Programming (and notions of C++)
Laptop, possibly with a NVIDIA GPU.

Lectures
Lecture 1: Introduction
Lecture 2: Getting started
Lecture 3: Parallel Programming in CUDA C
Lecture 4: Threads
Lecture 5: Constant Memory and Events
Lecture 6: Atomics
Lecture 7: Streams

Exercises
Exercises 3 and 4: instructions
Ex3: cudaMallocAndMemcpy.cu
Ex4: MyFirstKernel.cu

Project
Queueing project based on Markovian sources

Bibliography and links
CUDA homepage
Nvidia Developer’s Zone
Cuda SDK examples
CUDA by Example: An Introduction to General-Purpose GPU Programming, J. Sanders and E. Kandrot, Addison-Wesley (main text for this course)
Programming Massively Parallel Processors: A Hands-on Approach, D. Kirk and W. Hwu, Morgan Kaufmann (chapters 1-7)
Nvidia Cuda C Programming Guide, version 3.2, 7/21/2010
Nvidia Cuda C, Best Practices Guide, July 2009
Nvidia Cuda Reference Manual, version 3.2, August 2008
Nvidia Cuda Development Tools 2.3, Getting Started (Windows), July 2009
CUFFT Library, October 2007
CUBLAS Library, September 2007
CURAND Library, August 2010
CUSPARSE Library, August 2010
Accelerating MATLAB using Cuda, September 2007

Tools
CUDA Toolkit 4.0

Multi-core and concurrent programming

Prof. Partha Dasgupta, Arizona State University

Computer CPUs have moved to multi-core architectures due to issues with fabrication, power management, performance and packaging. Given that each core of a modern CPU has the same or less compute power than an older single-core CPU, improving application performance requires using more than one core per task. Hence the need for “parallel processing” within applications. This course is designed to teach the concepts of parallel programming for both the shared memory and the distributed memory architectures of modern systems. It covers the techniques and APIs used for managing multiple threads within an application, in both shared memory and distributed memory systems.

  • Part 1: Multicore Architectures (UMA, NUMA, CC-NUMA and the impact on software)
  • Part 2: Concurrency and Parallelism (The similarities and differences in concurrency and parallelism in programs)
  • Part 3: Scheduling and performance (How scheduling affects performance. How granularity depends on architectures)
  • Part 4: Shared Memory Programming (Race Conditions, Critical Sections, Synchronization and programming techniques)
  • Part 5: Distributed Memory Programming (Message passing, RPC, and programming techniques)
  • Part 6: Massively Parallel Architectures and Programming (Nvidia and CUDA)
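
The shared-memory issues of Part 4 can be previewed with a minimal C++ sketch (illustrative only): two threads increment a shared counter, and a mutex turns the read-modify-write into a critical section. Without the lock, the interleaved increments would be a race condition and updates could be lost.

```cpp
#include <cassert>
#include <mutex>
#include <thread>

// Shared state touched by several threads.
int counter = 0;
std::mutex m;

void increment(int times) {
    for (int i = 0; i < times; ++i) {
        std::lock_guard<std::mutex> lock(m);  // enter the critical section
        ++counter;                            // read-modify-write is now atomic
    }                                         // lock released on scope exit
}
```

With the mutex, two threads each performing 100000 increments always leave the counter at exactly 200000; removing the lock makes the final value unpredictable.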

Prerequisites
Programming, Data Structures, Basic knowledge of Computer Architecture and Operating Systems.

Evaluation
Lab assignments, final report

Web Technologies

Prof. Alvaro Monge, California State University Long Beach

This course will study primarily client-side web technologies: HTML, XHTML, CSS, and JavaScript. The focus will be on accessibility of content and the use of W3C (World Wide Web Consortium) recommendations. Depending on the schedule, we may also learn some basic server-side technologies to generate dynamic content.

Prerequisites
At least one year of programming (C++, Java)

Evaluation
Homework, quizzes, project

Java EE6 for Elvis

Prof. Cay S. Horstmann, San Jose State University, California.

In this course, you will learn how to easily develop web and database applications in Java EE 6. Why Elvis? Microsoft classifies developers as Mort (can click a mouse and code in Basic), Elvis (competent and pragmatic programmer) and Einstein (self-explanatory). Java EE used to be so hard that only Einstein could use it, but it has now been dramatically simplified and is accessible to Elvis. Unlike Mort, who labors with ASP.NET or PHP, Elvis gets robustness and scalability without having to reinvent the wheel, thanks to the EE platform. Topics covered: JSF, web components, templating, Ajax, JPA, transactions, clustering, internationalization, security.

Required background
Basic Java programming, basic knowledge of HTML and SQL. A laptop with Eclipse or Netbeans.

Machine Intelligence

Prof. Andres Perez-Uribe, HEIG-VD, Switzerland.

The idea of building machines with some sort of intelligence has been present at least since ancient Greek mythology. With the advent of the computer, around 50 years ago, the Artificial Intelligence (AI) domain was born, adopting a symbol-manipulation approach that has enabled us to achieve great exploits, like beating the best chess player in history. Nevertheless, most of us will agree that Deep Blue is not an intelligent machine. How, then, are we going to build intelligent machines?

During this course we will move from the symbolic paradigm (so-called Good Old-Fashioned AI) onto a connectionist paradigm, whereby parallel distributed processing models are implemented using neurally inspired systems (e.g., artificial neural networks), and then onto an embodied intelligence paradigm, where the emergence of intelligence is thought to be importantly driven by the interaction between the system (e.g., a robot) and its environment.
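
The connectionist paradigm can be made concrete with the smallest possible neurally inspired system: a single perceptron trained, by the classic perceptron rule, to compute the logical AND. The C++ sketch below is purely illustrative (the course labs use their own tools; integer weights and a unit learning rate are our simplifications):

```cpp
#include <array>
#include <cassert>
#include <vector>

// A single artificial neuron (perceptron) with two inputs.
struct Perceptron {
    int w0 = 0, w1 = 0, bias = 0;

    // Fire iff the weighted sum exceeds the threshold (folded into bias).
    int predict(int x0, int x1) const {
        return (w0 * x0 + w1 * x1 + bias) > 0 ? 1 : 0;
    }

    // Perceptron learning rule: nudge the weights by the prediction error.
    void train(const std::vector<std::array<int,3>>& samples, int epochs) {
        for (int e = 0; e < epochs; ++e)
            for (const auto& s : samples) {       // s = {x0, x1, target}
                int err = s[2] - predict(s[0], s[1]);
                w0 += err * s[0];
                w1 += err * s[1];
                bias += err;
            }
    }
};
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this training loop settles on correct weights after a handful of epochs; XOR, famously, is not separable and would never converge.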

The course will consist of lectures and laboratory exercises, whereby the students will be able to use and understand the workings of relevant examples of the three mentioned AI paradigms.

Lectures
1. Machine intelligence introduction
2. Artificial Intelligence
3. Turing test of intelligence
4. New AI
5. Autonomous learning robots
6. Machiavellian intelligence

Laboratory exercises
1. GOFAI: Good Old-fashioned Artificial Intelligence
2. Robot programming
3. Supervised learning artificial neural networks
4. Teaching a robot how to behave in an unknown environment

Recommended knowledge
Data structures and algorithms, Java, C, elementary vector math and differential calculus

Linear and Non-linear Image Processing

Prof. Michel Kocher, HEIG-Vd, Switzerland

In this course, two major approaches to image processing will be developed.

First, the linear approach, based on signal processing theory. This methodology relies on linear models, convolution, the Fourier transform, correlation and filter design.

Second, a non-linear approach provided by the Mathematical Morphology framework. This paradigm, developed in France about 50 years ago, uses only minimum and maximum operators together with intersection, union and negation. It produces astonishingly good results, is very robust to noise and is, in certain cases, a very interesting alternative to the linear approach.

These two methodologies will be described in theory and illustrated with numerous applications taken from the industrial world as well as from the biomedical domain. The students will apply the described algorithms with the help of the Matlab language and the Image Processing Toolbox. At the end of the course, the students will be able to describe an image processing problem in terms of different algorithms. They will also be able to program some of them and to compare them in terms of speed (complexity) and noise robustness.

Introduction to Ubiquitous Computing

Prof. Olivier Liechti, HEIG-Vd, Switzerland.

Ubiquitous computing is a term coined by Mark Weiser to describe the third wave of computing. The idea is that people will increasingly interact with very diverse computing devices: mobile devices, interactive surfaces, sensors, etc. These devices are embedded in the environment and almost seem to « disappear ». As a result, the interaction between humans and technology becomes more natural and more effective.

In this course, we will provide an introduction to ubiquitous computing and study topics such as ambient interfaces, context-aware computing and location-based services. We will address them from various perspectives, to highlight that ubiquitous computing is a multi-disciplinary field (human-computer interaction, computer mediated communication, distributed systems, middleware, etc).

Introduction to Bioinformatics

Prof. Sami Khuri, San Jose State University, California.

Web page of this course:
http://www.cs.sjsu.edu/~khuri/Yverdon_2010/

Eco-computing

Prof. Jon Pearce, San Jose State University, California.

Web page of this course

The NetLogo language is based on an intriguing eco-oriented paradigm: virtual turtles swimming around in a virtual pond. Turtles are provincial. They know nothing of the pond beyond their limited field of vision. A turtle’s behavior is determined by a list of simple procedures that it perpetually executes. Turtles eat, mate, age, and die. They cheat and cooperate. They buy and sell. They hunt and flee. They spread rumors and diseases. They imitate their neighbors. Oh, I almost forgot, they can also draw.

Although turtles are provincial and their behavior simple, the behavior of the ecosystem as a whole (pond + turtles) can be surprisingly complex. Patterns emerge: self-regulation, self-organization, boom and bust cycles, synchronicity, flocking, rebelling, tipping points, evolution, even standards of morality.

NetLogo can be viewed as a laboratory for studying the emergent behavior of agent-based systems. Its ease of use (NetLogo is based on Logo, which was designed for children) makes it popular among biologists, economists, sociologists, chemists, physicists, and artists. Agent-based architectures are also interesting to computer scientists attempting to leverage massively parallel systems while avoiding the complexity of centralized control.

In this course we will use NetLogo to model complex systems. We will also explore the eco-oriented paradigm as an approach to games, ambient computing, and grid computing.

Spatial localization and identification of objects based on video streams

Dr. Julien de Siebenthal, HEIG-Vd, Switzerland

We will target a complete pipeline for locating and identifying passive markers used to overlay 3D objects.

(1) introduction to video cameras:
– basic optics, basic radiometry, geometric image formation
– spatial sampling, noise estimation & filtering
– camera parameters

(2) image features:
– edge detection
– Hough transform for lines and curves
– ellipse fitting

(3) recognition (identification):
– interpretation trees
– invariants

(4) locating image in space & 3D:
– matching from intensity data (perspective, weak perspective)
– 3D overlay
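
As a taste of item (2), the Hough transform for lines lets each edge point vote for every parameter pair (theta, rho) with rho = x·cos(theta) + y·sin(theta); collinear points pile their votes into the same accumulator cell. A simplified C++ sketch (the lab work uses OpenCV; all names here are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Hough transform for straight lines over a set of edge points.
// Returns the (theta index, rho index) accumulator cell with the most votes.
std::pair<int,int> hough_peak(const std::vector<std::pair<int,int>>& pts,
                              int n_theta, int n_rho, double rho_max) {
    const double PI = std::acos(-1.0);
    std::vector<std::vector<int>> acc(n_theta, std::vector<int>(n_rho, 0));
    for (const auto& p : pts)
        for (int t = 0; t < n_theta; ++t) {
            double theta = PI * t / n_theta;
            double rho = p.first * std::cos(theta) + p.second * std::sin(theta);
            // Map rho in [-rho_max, rho_max] onto a discrete bin and vote.
            int r = (int)std::round((rho + rho_max) / (2 * rho_max) * (n_rho - 1));
            if (r >= 0 && r < n_rho) ++acc[t][r];
        }
    std::pair<int,int> best{0, 0};
    for (int t = 0; t < n_theta; ++t)
        for (int r = 0; r < n_rho; ++r)
            if (acc[t][r] > acc[best.first][best.second]) best = {t, r};
    return best;
}
```

The same voting idea extends to circles and, with ellipse fitting, to the passive markers used in the pipeline above.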

Laboratory environment
C/C++ code, OpenCV, OpenGL, ARToolkit

Laboratory exercises
– image filtering: noise estimation
– edge detection & Hough transform for lines
– locating image in space: perspective model

Requirements
Laptop, C compiler (Visual 9 Express)

Reference book
Introductory Techniques for 3-D Computer Vision, E. Trucco and A. Verri, Prentice Hall, 1998

Link: http://en.wikipedia.org/wiki/ARToolKit