Multiview depth imagery will play a pivotal role in free-viewpoint television. This technology requires high-quality virtual view synthesis to enable viewers to move freely in a dynamic real-world scene. Depth imagery at different viewpoints is used to synthesize an arbitrary number of novel views. Usually, depth images at multiple viewpoints are estimated individually by stereo-matching algorithms and hence lack inter-view consistency. This inconsistency degrades the quality of view synthesis. We propose a depth consistency testing procedure that enhances the depth representation of a scene across multiple viewpoints by exploiting the information obtained from the test. Furthermore, we propose a view synthesis algorithm that uses this consistency information to improve the visual quality of virtual views at arbitrary viewpoints. For general multiview depth imagery, we observe that the consistency of depth pixel values varies spatially. We therefore use the consistency information to define clusters and sub-clusters of depth pixels, and we observe that this clustering approach enhances the depth information of real-world scenes. In combination with our consistency-adaptive view synthesis, this improves the visual experience of the free-viewpoint user. Experiments show that our approach improves the objective quality of virtual views by up to 1 dB; a corresponding gain in subjective quality is also demonstrated.
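The abstract does not spell out the consistency test itself. As a rough illustration only, the following is a minimal sketch of a pairwise depth consistency check for a rectified pinhole camera pair: a reference pixel is warped into a second view via its disparity, and the depth it lands on must agree within a relative tolerance. The function name `depth_consistency_mask`, the tolerance `tau`, and the rectified two-view setup are all assumptions for illustration, not the paper's actual multiview procedure.

```python
import numpy as np

def depth_consistency_mask(depth_ref, depth_src, focal, baseline, tau=0.02):
    """Hypothetical per-pixel consistency test between two depth maps
    of a rectified camera pair (a simplification for illustration).

    Assumes all depths are positive. A reference pixel is warped into
    the source view via its disparity d = focal * baseline / depth;
    the source depth at the warped position must agree with the
    reference depth within a relative tolerance tau.
    """
    h, w = depth_ref.shape
    mask = np.zeros((h, w), dtype=bool)
    xs = np.arange(w)
    for y in range(h):
        # Disparity of each reference pixel (pinhole model, rectified pair).
        disp = focal * baseline / depth_ref[y]
        # Warped column in the source view (nearest-pixel rounding).
        x_src = np.rint(xs - disp).astype(int)
        valid = (x_src >= 0) & (x_src < w)
        # Depth observed at the warped position; invalid pixels get inf.
        d_src = np.where(valid, depth_src[y, np.clip(x_src, 0, w - 1)], np.inf)
        # Consistent iff the relative depth difference stays below tau.
        mask[y] = valid & (np.abs(d_src - depth_ref[y]) <= tau * depth_ref[y])
    return mask
```

Pixels flagged as inconsistent by such a test could then drive the clustering and the consistency-adaptive synthesis described above, e.g. by weighting consistent depth samples more heavily when blending warped views.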