This paper presents an approach for view-based recognition of gestures. The approach represents each gesture as a sequence of learned body poses. Gestures are recognized through a probabilistic framework that matches these body poses to image data and imposes temporal constraints between them. Individual poses are matched using a probabilistic formulation of edge matching that yields a likelihood measurement for each pose. The paper introduces a weighted matching scheme for edge templates that emphasizes discriminative features in the matching; the weighting does not require establishing correspondences between the different pose models. The probabilistic framework also imposes temporal constraints between different poses through a Hidden Markov Model (HMM) learned for each gesture.
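To make the two-stage pipeline concrete, the following is a minimal sketch in Python. It assumes a Gaussian error model over a weighted Chamfer-style edge distance for the pose likelihood and a standard log-space forward algorithm for the per-gesture HMM; the function names, the distance-transform input, and the Gaussian model are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def pose_log_likelihood(dist_transform, template_points, weights, sigma=2.0):
    """Weighted Chamfer-style edge matching for one pose template.

    dist_transform : 2D array giving, at each pixel, the distance to the
                     nearest observed edge pixel (precomputed).
    template_points: (N, 2) integer array of (row, col) edge locations
                     of the pose template.
    weights        : (N,) per-point weights emphasizing discriminative
                     template features (higher = more informative).

    Returns a log-likelihood under an assumed Gaussian error model on the
    weighted average edge distance.
    """
    rows, cols = template_points[:, 0], template_points[:, 1]
    d = dist_transform[rows, cols]            # distance at each template point
    avg = np.sum(weights * d) / np.sum(weights)
    return -0.5 * (avg / sigma) ** 2

def forward_log_score(log_pi, log_A, log_b):
    """Standard HMM forward algorithm in log space.

    log_pi : (S,) initial state log-probabilities.
    log_A  : (S, S) state-transition log-probabilities.
    log_b  : (T, S) per-frame pose log-likelihoods (from edge matching).

    Returns the log-likelihood of the frame sequence under this gesture HMM.
    """
    alpha = log_pi + log_b[0]
    for t in range(1, len(log_b)):
        # logsumexp over previous states for each current state
        m = alpha[:, None] + log_A
        mx = np.max(m, axis=0)
        alpha = mx + np.log(np.sum(np.exp(m - mx), axis=0)) + log_b[t]
    mx = np.max(alpha)
    return mx + np.log(np.sum(np.exp(alpha - mx)))
```

A gesture would then be recognized by evaluating the observed frame sequence under each gesture's learned HMM (with per-frame emission scores coming from the weighted edge matching) and selecting the highest-scoring model.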