Honoris Causa Day
Amphithéâtre Louis Antoine
Campus de Beaulieu, Rennes
To view the presentations from the Honoris Causa seminar, you need a recent, free version of the RealPlayer media player.
Note: these presentations are not final versions. We wanted to make them available quickly in order to underline the importance of this seminar.
Alan Willsky
Edwin S. Webster Professor of Electrical Engineering, MIT, Cambridge, USA
Graphical models, distributed fusion and sensor networks
Listen to the audio; watch the video; view the slides (PDF)
Presentation
In
this talk we provide a picture of one group’s journey through a set
of related research topics and lines of inquiry. The precursors to this journey
begin with some early work on Markov random fields (MRF) and image processing
and, more particularly, with the search for models and model classes that
have structure that yields both insight and computationally powerful algorithms.
The real point of departure for this talk, however, begins with the author’s
joint work (beginning in 1988) with Albert Benveniste, Michèle Basseville,
and Ramine Nikoukhah of IRISA and INRIA on multiresolution models defined
on trees. We provide a brief overview of the nature of the results from that
collaboration and from its continuation after the author’s return from
his 1988 sabbatical in Rennes (including some work with Khalid Daoudi of INRIA
that finally realized part of the original research agenda posed at the start
of 1988). We’ll also briefly talk about the many applications these
results have found, and then will turn to work that we’ve pursued fueled
by the limitations of models defined on trees rather than on more general
graphs.
Markov models defined on graphs with loops form a very rich and active field,
finding applications in a surprisingly wide array of disciplines, and challenging
theoreticians and algorithm developers to devise methods that are both computationally
tractable and high-performance. We provide a picture of some of our contributions
in this area, all of which build (in one way or another) on our work on models
defined on trees but also make explicit contact with the rich class of
so-called “message-passing” algorithms (such as the celebrated
“Belief Propagation” (BP) algorithm) for graphical models. Among the
contributions we will mention are so-called “embedded tree” algorithms
for the efficient and exact solution of a nontrivial class of Gaussian MRFs;
“tree-reparameterization” algorithms which lead to a number of
theoretical results on graphical models; a new recursive cavity modeling (RCM)
algorithm that blends tree-based estimation with ideas in information geometry
to lead to algorithms that allow scalable solution of very large estimation
problems; the concept of “walk-sums” for graphical models and
the new theoretical results they admit for belief propagation; and an approach
we call “Nonparametric Belief Propagation” that involves a nontrivial
extension of the idea of particle filtering to message-passing algorithms.
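To make the message-passing idea concrete, here is a minimal sketch of the sum-product (Belief Propagation) updates on a toy pairwise MRF; the three-node chain and its potentials are invented for illustration and stand in for the far richer models discussed in the talk.

```python
# A minimal sketch of sum-product Belief Propagation on a small pairwise
# MRF; the graph and potentials below are invented purely for illustration.
import numpy as np

nodes = [0, 1, 2]                       # a three-node chain: 0 - 1 - 2
edges = [(0, 1), (1, 2)]
K = 2                                   # binary variables

node_pot = {i: np.array([0.6, 0.4]) for i in nodes}        # phi_i(x_i)
edge_pot = {e: np.array([[0.9, 0.1],                       # psi_ij(x_i, x_j)
                         [0.1, 0.9]]) for e in edges}

# Messages m[(i, j)] from node i to node j, initialised uniformly.
msgs = {(i, j): np.ones(K) for (i, j) in edges}
msgs.update({(j, i): np.ones(K) for (i, j) in edges})

def neighbours(i):
    return [j for (a, j) in msgs if a == i]

for _ in range(20):                     # iterate the updates to a fixed point
    new = {}
    for (i, j) in msgs:
        # Orient the pairwise potential as psi[x_i, x_j].
        psi = edge_pot[(i, j)] if (i, j) in edge_pot else edge_pot[(j, i)].T
        # Product of the node potential and all incoming messages except j's.
        prod = node_pot[i].copy()
        for k in neighbours(i):
            if k != j:
                prod = prod * msgs[(k, i)]
        m = psi.T @ prod                # marginalise out x_i
        new[(i, j)] = m / m.sum()       # normalise for numerical stability
    msgs = new

# Approximate marginals ("beliefs"); exact here because the graph is a tree.
for i in nodes:
    b = node_pot[i].copy()
    for k in neighbours(i):
        b = b * msgs[(k, i)]
    print(i, b / b.sum())
```

On a tree such as this chain, the iteration converges to the exact marginals; on graphs with loops the same updates give the approximate "loopy" BP the talk discusses.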
We also describe our growing investigation of distributed fusion algorithms
for sensor networks, in which there is a natural graph associated with network
connectivity, as well as possibly two other graphs: one, relating the variables
that are sensed and those that are to be estimated and a second relating the
sources of information to the desired “sinks” (i.e., to nodes
with responsibility for certain actions). We are still early in this investigation,
but we describe several results, including some on what we call “message-censoring,”
in which a sensor decides not to send a BP message. Empirical studies of censoring
motivated a theoretical investigation into the propagation of message errors
in BP, a study that has also produced the tightest results to date on BP convergence.
We also describe our results on efficient communication of messages and the
tradeoff between communication load and performance and on sensor resource
management, in which we take into account not just the power cost of taking
a measurement and communicating a message but also that of dynamically “handing
off” responsibility for estimation from one node to another. Further,
in some initial work on the rapprochement of message-passing algorithms and
decentralized detection, we note that an important component
of sensor network activity is “self-organization” and describe,
for a simple scenario, how a team-decision problem can (a) be solved
via a message-passing algorithm and (b) lead to what can be thought
of as a network protocol coupling the physical and application layers.
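One plausible reading of the message-censoring idea is sketched below under our own assumptions: a node transmits a BP message only when it has moved sufficiently far from the last message it sent. The L1 distance measure and the threshold eps are illustrative choices, not those of the work described in the talk.

```python
# Illustrative sketch of message-censoring in BP; the distance measure
# and threshold are our own assumptions, not the cited work's rule.
import numpy as np

def maybe_send(new_msg, last_sent, eps=5e-2):
    """Transmit the BP message only if it has moved more than eps
    (in L1 distance) since the last transmission, saving the
    communication cost of near-redundant updates."""
    if last_sent is None or np.abs(new_msg - last_sent).sum() > eps:
        return new_msg        # transmit; the receiver updates its belief
    return None               # censored; the receiver reuses the old message

# Example: the second, nearly identical message is censored.
m1 = np.array([0.70, 0.30])
m2 = np.array([0.71, 0.29])
sent = maybe_send(m1, None)             # -> m1 is transmitted
print(maybe_send(m2, sent))             # -> None (censored)
```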
We close the talk with some brief comments on several paths ahead, making
it clear that there are many more things that we do not yet understand
than things that we already do.
David Harel
The William Sussman Professorial Chair, Dept. of Computer Science and Applied Mathematics,
The Weizmann Institute of Science, Rehovot, Israel
Computers are not omnipotent
Listen to the audio; watch the video; view the slides (PDF)
Presentation
In
1984, TIME magazine quoted the chief editor of a certain software publication
as saying:
"Put the right kind of software into a computer, and it will do whatever you
want it to. There may be limits on what you can do with the machines themselves,
but there are no limits on what you can do with software."
This talk will survey results obtained over the last 75 years by mathematicians,
logicians and computer scientists, which disprove this ignorance-based statement
in a sweeping and fundamental way. We shall discuss problems that are provably
non-computable, as well as ones that are hopelessly time- or memory-consuming
(requiring far more time than has elapsed since the Big Bang, or requiring
a computer that would not fit into the entire known universe). We will also take
a somewhat more amusing look at these facts, and relate them to the (im)possibilities
of true artificial intelligence.
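The canonical example of a provably non-computable problem is the halting problem. The sketch below replays Turing's diagonal argument in Python; the decider halts is the hypothetical function whose existence the argument refutes, so its body deliberately cannot be filled in.

```python
# Turing's diagonal argument, sketched in Python.

def halts(f, x):
    """Hypothetical decider: returns True iff f(x) terminates.
    The argument below shows that no correct implementation can exist."""
    raise NotImplementedError("no such decider exists")

def paradox(f):
    if halts(f, f):           # if f(f) would halt...
        while True:           # ...loop forever;
            pass
    return                    # ...otherwise halt immediately.

# Does paradox(paradox) halt?
#  - If halts(paradox, paradox) returned True, paradox(paradox) would loop forever.
#  - If it returned False, paradox(paradox) would halt at once.
# Either answer is wrong, so no program can decide halting in general.
```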
Colloquium
30 Years of Irisa
Amphithéâtre P
Nuclear Magnetic Resonance (NMR) spectroscopy is a powerful tool now being used together with a variety of techniques to study the structure of complex molecules of interest in biology. The experimental techniques and signal processing used in this field are directed toward the identification of very complex linear systems, systems that must be probed with inputs to coax out the most informative responses. As a by-product, we shed new light on the use of feedback in the more standard linear system identification problems found in engineering.
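As a hedged illustration of the generic identification problem (not the NMR-specific techniques of the talk), the following sketch probes an unknown FIR system with a white-noise input and recovers its impulse response by ordinary least squares; the system and noise level are invented.

```python
# A minimal sketch of probing a linear system and identifying it by
# least squares; the FIR model and white-noise probe are illustrative
# choices, not the NMR-specific methods discussed in the talk.
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.5, 0.3, -0.2, 0.1])    # unknown impulse response

u = rng.standard_normal(500)                # rich ("persistently exciting") input
y = np.convolve(u, h_true)[:len(u)] + 0.01 * rng.standard_normal(len(u))

# Regression matrix whose k-th column is the input delayed by k samples.
n = len(h_true)
U = np.column_stack([np.concatenate([np.zeros(k), u[:len(u) - k]])
                     for k in range(n)])
h_est, *_ = np.linalg.lstsq(U, y, rcond=None)
print(h_est)                                # close to h_true
```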
Natural Computing is a dynamic and fast growing research area concerned with computing taking place in nature and with human-designed computing inspired by nature. A popular and important topic in the former line of research is computation in living cells, and a good example of this research is the investigation of gene assembly in ciliates. Ciliates, a very ancient group of unicellular organisms, have evolved extraordinary ways of organizing, manipulating and replicating the DNA in their micronuclear genomes. The way that ciliates
transform genes from their micronuclear (storage) form into their macronuclear
(expression) form is the most involved DNA processing in nature that
is known to us. This gene assembly process is also very interesting
from the computational point of view. Current
research concerning gene assembly is an example of flourishing interdisciplinary
research involving computer scientists and molecular biologists.
In the last decade, wavelets have led to efficient image compression algorithms now standardized in JPEG2000. In this talk we will discuss how wavelets and their generalizations -- so-called sparse atomic decompositions -- can help solve challenging problems in 'computer audition': the spatial localization of several musical instruments from a stereophonic recording, and the separate extraction of the sound of each instrument for focused listening.
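As a minimal illustration of the sparsity idea behind these methods (not the talk's audio-separation machinery), the sketch below applies a one-level Haar wavelet transform to a piecewise-constant signal: all but one detail coefficient vanish, so thresholding isolates the single atom that carries the jump.

```python
# One-level Haar transform of a piecewise-constant signal: the detail
# coefficients are sparse, which is what wavelet compression exploits.
# (Illustrative only; real audio separation uses far richer atoms.)
import numpy as np

x = np.concatenate([np.ones(7), 3 * np.ones(9)])       # one jump
approx = (x[0::2] + x[1::2]) / np.sqrt(2)              # Haar averages
detail = (x[0::2] - x[1::2]) / np.sqrt(2)              # Haar differences

detail[np.abs(detail) < 0.5] = 0                       # keep only big atoms
print("nonzero detail coefficients:", np.count_nonzero(detail))   # -> 1

# Invert the transform to reconstruct the signal from the sparse code.
rec = np.empty_like(x)
rec[0::2] = (approx + detail) / np.sqrt(2)
rec[1::2] = (approx - detail) / np.sqrt(2)
```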
Global optimization problems arise in a wide range of applications, from combinatorial optimization problems such as the traveling salesman problem, to continuous and stochastic problems that arise in manufacturing and telecommunications, such as inventory control and buffer allocation. We present a randomized algorithm called Model Reference Adaptive Search (MRAS) for solving both continuous and combinatorial global optimization problems. At each iteration, the algorithm generates a group of candidate solutions according to a parameterized probabilistic model. These candidate solutions are then used to update the parameters associated with the probabilistic model in such a way that the future search will be biased toward the region containing high-quality solutions. The parameter updating procedure in MRAS is guided by a sequence of implicit reference models that will eventually converge to a model producing only the optimal solutions. We establish global convergence of MRAS in both continuous and combinatorial domains. Numerical studies are also presented to demonstrate the effectiveness of the algorithm.
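The generate-and-update loop described above can be sketched as follows for a one-dimensional continuous problem. The Gaussian sampling model and the elite-fraction parameter update are our illustrative stand-ins for MRAS's reference-model update, not the algorithm's actual rule.

```python
# Sketch of the generate/update loop the abstract describes; the
# elite-fraction update is an illustrative stand-in for MRAS's
# reference-model update.
import numpy as np

def objective(x):
    return -(x - 2.0) ** 2                # global maximum at x = 2

rng = np.random.default_rng(0)
mu, sigma = 0.0, 5.0                      # parameters of the sampling model
for _ in range(50):
    xs = rng.normal(mu, sigma, size=200)          # generate candidates
    elite = xs[np.argsort(objective(xs))[-20:]]   # keep the best 10%
    mu, sigma = elite.mean(), elite.std() + 1e-6  # bias search toward them
print(mu)                                 # converges near 2.0
```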
In three classical papers published between 1979 and 1982, Harel, together with Chandra, laid the foundations for the theory of relational queries. In particular, these papers highlighted the centrality of fixpoint queries defined by means of function-free Horn clauses, which became known as Datalog queries. The study of Datalog was a major theme in database-theory research from the mid 1980s to the mid 1990s. Unfortunately, most decision problems involving Datalog turned out to be undecidable. Furthermore, well-behaved fragments of Datalog turned out to be rather restrictive and unnatural. Surprisingly, a natural and quite general fragment of Datalog did emerge in the mid 1990s, in the context of semistructured data. This fragment is the class of regular queries, whose basic element is the regular path query. In this talk I will describe the class of regular queries and its well-behavedness.
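As a small illustration: a regular path query asks for pairs of nodes connected by a path whose edge labels spell a word in a regular language. The sketch below evaluates one query by breadth-first search over the product of the graph with a finite automaton; the graph, the labels, and the automaton for the language a b* are invented examples.

```python
# Evaluating a regular path query: find node pairs (u, v) connected by a
# path whose labels spell a word in the language a b*. Graph and automaton
# are invented examples.
from collections import deque

edges = {("u", "a", "v"), ("v", "b", "v"), ("v", "b", "w")}

# DFA for a b*: state 0 --a--> 1, state 1 --b--> 1; state 1 is accepting.
delta = {(0, "a"): 1, (1, "b"): 1}
accepting = {1}

def regular_path_query(edges, delta, accepting):
    nodes = {u for (u, _, v) in edges} | {v for (u, _, v) in edges}
    answers = set()
    for start in nodes:
        # BFS over the product of the graph and the automaton.
        frontier, seen = deque([(start, 0)]), {(start, 0)}
        while frontier:
            node, state = frontier.popleft()
            if state in accepting:
                answers.add((start, node))
            for (u, lbl, v) in edges:
                nxt = delta.get((state, lbl))
                if u == node and nxt is not None and (v, nxt) not in seen:
                    seen.add((v, nxt))
                    frontier.append((v, nxt))
    return answers

print(regular_path_query(edges, delta, accepting))
# {('u', 'v'), ('u', 'w')}
```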
For the last 50 years, computer architects have been able to exploit advances in integration technology to design more and more powerful uniprocessors. Memory hierarchy, pipelined architectures, instruction-level parallelism and speculative execution are the techniques that have allowed the uniprocessor industry to double processor performance every two years (the famous Moore's law). For a long time, due to longer design cycles and the lack of efficient software, multiprocessors were not very effective. However, for the last few years, increasing processor complexity (e.g. using deeper pipelines or more instruction-level parallelism) has yielded poor performance returns. Therefore, processor manufacturers have begun to implement thread-level parallelism on a single chip, often mixing simultaneous multithreading and multicores. In this presentation, we recall the concepts that have driven uniprocessor architecture advances for the past 30 years. Then we describe the thread-parallelism implementations that are appearing on-chip. Finally, as parallel software is frequently the missing piece, we suggest some research directions for leveraging parallel hardware with sequential software.
It is well known that the minimum achievable rate for lossless compression of two statistically dependent memoryless sources is given by the joint entropy of the two sources. To approach this rate bound, practical systems, e.g., video compression systems, encode and decode the two sources jointly. However, Slepian and Wolf established in the 1970s that this lossless compression rate bound can also be approached, with a vanishing error probability for long sequences, when encoding the two sources separately and decoding them jointly. This theorem has led to a new coding paradigm known as distributed source coding. The lossy equivalent of the Slepian-Wolf theorem was formulated later by Wyner and Ziv. Although these information-theoretic results have been known for the past 30 years, it is only recently that practical solutions have been explored for applications ranging from video compression and resilient video transmission to the minimization of transmit energy in sensor networks. In this talk, we will focus on the problem of distributed compression, highlighting its potential for video compression and wireless uplink transmission. The talk will first revisit the main principles of coding/decoding with side information and of distributed source coding, and will present some constructive codes that approach the Slepian-Wolf and Wyner-Ziv bounds. The talk will then illustrate the application of the distributed coding paradigm to video compression, leading to a shift of complexity from encoder to decoder, desirable for a class of transmission scenarios in wireless video and surveillance systems, as well as to some inherent resilience to channel impairments.
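A standard textbook illustration of coding with side information (not the specific codes of the talk): if X and the decoder's side information Y are 7-bit words differing in at most one position, the encoder can send just the 3-bit Hamming(7,4) syndrome of X instead of all 7 bits, and the decoder recovers X exactly from Y and the syndrome, achieving the Slepian-Wolf saving for this toy correlation model.

```python
# Toy Slepian-Wolf coding via syndrome binning with the Hamming(7,4)
# code: X and Y differ in at most one bit; the encoder sends only the
# 3-bit syndrome of X, never seeing Y.
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],     # parity-check matrix; column j
              [0, 1, 1, 0, 0, 1, 1],     # is the binary expansion of j
              [0, 0, 0, 1, 1, 1, 1]])

def encode(x):                  # encoder: 3 bits instead of 7
    return H @ x % 2

def decode(syndrome, y):        # decoder sees Y and the syndrome of X
    diff = (H @ y + syndrome) % 2         # syndrome of the error X ^ Y
    x_hat = y.copy()
    if diff.any():                        # nonzero -> locate the flipped bit
        pos = int("".join(map(str, diff[::-1])), 2) - 1
        x_hat[pos] ^= 1
    return x_hat

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy(); y[4] ^= 1                   # side information: one bit flipped
assert (decode(encode(x), y) == x).all()
```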