Face modeling is an important step in model-based face reconstruction. This paper proposes a method that adapts a facial wireframe model to two face images (a front view and a side view) with some user interaction. First, the position of the face and its feature regions in the front image are located by region growing and template matching, and deformable templates are used to extract the full set of facial features. Second, the exact frontal positions of the features are rectified by hand through a user-friendly interface, and the depth positions of the feature points are specified manually from the side image. Finally, the rotation of the head is calculated, the model is scaled, and the remaining vertices of the model are adapted by an inverse distance interpolation algorithm that takes the feature points as data points, yielding the final face model. Test results show that the method is simple and effective.
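The paper does not give the exact form of its inverse distance interpolation, but the idea of moving non-feature vertices by a distance-weighted average of the known feature-point displacements can be sketched as Shepard-style inverse distance weighting; the function name, power parameter, and 3-D point layout below are assumptions for illustration.

```python
import numpy as np

def idw_displace(vertices, feature_pts, feature_disp, power=2.0, eps=1e-9):
    """Move each generic vertex by an inverse-distance-weighted average of
    the known feature-point displacements (Shepard interpolation sketch).

    vertices:     (N, 3) model vertices to adapt
    feature_pts:  (M, 3) feature points on the model (the data points)
    feature_disp: (M, 3) displacements of those feature points
    """
    vertices = np.asarray(vertices, dtype=float)
    feature_pts = np.asarray(feature_pts, dtype=float)
    feature_disp = np.asarray(feature_disp, dtype=float)

    out = np.empty_like(vertices)
    for i, v in enumerate(vertices):
        d = np.linalg.norm(feature_pts - v, axis=1)
        if d.min() < eps:
            # Vertex coincides with a feature point: use its displacement directly.
            out[i] = v + feature_disp[d.argmin()]
            continue
        w = 1.0 / d**power           # closer feature points dominate
        out[i] = v + (w[:, None] * feature_disp).sum(axis=0) / w.sum()
    return out
```

With `power=2`, a vertex midway between two feature points receives the average of their displacements, while vertices near a single feature point follow it almost exactly.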